Arbitrary-Order Iterative Learning Control Considering H∞ Synthesis

Author(s): Minghui Zheng, Cong Wang, Liting Sun, Masayoshi Tomizuka

Iterative learning control (ILC) is an effective technique for improving the tracking performance of systems by adjusting the feedforward control signal based on data stored from previous iterations. It is critically important to design learning filters that assure robust convergence of the tracking error from one iteration to the next. The design procedure usually involves substantial tuning work, especially in high-order ILC. To facilitate this procedure, this paper proposes an approach to design learning filters for an arbitrary-order ILC with guaranteed convergence and ease of tuning. The filter design problem is formulated as an H∞ optimal control problem. The approach is based on an infinite impulse response (IIR) system and is conducted directly in the iteration-frequency domain. Important characteristics of the proposed approach are explored and demonstrated on a simulated wafer scanning system.
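As a minimal illustration of the learning-filter structure discussed in this abstract (not the paper's H∞ synthesis itself), the sketch below runs a first-order ILC law u_{k+1} = Q(u_k + L·e_k) in Python; the FIR plant, scalar gain L, and moving-average stand-in for the Q-filter are all invented for the example:

```python
import numpy as np

def simulate_plant(u, g):
    """Plant output as convolution of the input with impulse response g."""
    return np.convolve(u, g)[: len(u)]

def ilc_update(u, e, L_gain, q_window):
    """First-order ILC law u_{k+1} = Q(u_k + L*e_k).

    L is a scalar learning gain and Q a moving-average low-pass filter;
    both are placeholders for filters a designed synthesis would produce."""
    kernel = np.ones(q_window) / q_window
    return np.convolve(u + L_gain * e, kernel, mode="same")

g = 0.5 ** np.arange(50)                   # assumed stable impulse response
N = 200
r = np.sin(2 * np.pi * np.arange(N) / 50)  # repetitive reference
u = np.zeros(N)
err_norms = []
for k in range(20):
    e = r - simulate_plant(u, g)
    err_norms.append(np.linalg.norm(e))
    u = ilc_update(u, e, L_gain=0.3, q_window=5)
```

Monotone decay of `err_norms` corresponds to the frequency-domain contraction condition |Q(1 − LG)| < 1; an H∞ design replaces the hand-tuned L and Q with filters synthesized to guarantee that bound.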

Complexity, 2018, Vol 2018, pp. 1-6
Author(s): Yun-Shan Wei, Qing-Yuan Xu

For linear discrete-time systems with randomly variable input trial length, a proportional- (P-) type iterative learning control (ILC) law is proposed. To handle the randomly variable input trial length, a modified control input at the desired trial length is introduced in the proposed ILC law. Under the assumption that the initial state fluctuates around the desired initial state with zero mean, the designed ILC scheme drives the ILC tracking errors to zero at the desired trial length in the expectation sense. The designed ILC algorithm allows the trial length of the control input to differ from that of the system state and output at a specific iteration. In addition, the identical initial condition widely used in conventional ILC design is also relaxed. An example demonstrates the validity of the proposed ILC algorithm.
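A minimal sketch of the variable-trial-length idea, assuming a hypothetical static plant gain and applying P-type updates only over the randomly realized trial length (the paper's modified-input construction and expectation-sense proof are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                # desired trial length
r = np.linspace(0.0, 1.0, N)           # hypothetical desired trajectory
b = 0.8                                # assumed static plant gain
gamma = 1.0                            # P-type learning gain (|1 - gamma*b| < 1)
u = np.zeros(N)

for k in range(500):
    n_k = rng.integers(N // 2, N + 1)  # randomly variable trial length
    e = r[:n_k] - b * u[:n_k]          # output data exists only up to n_k
    u[:n_k] += gamma * e               # update only where data was collected
```

Indices reached by every trial converge deterministically; later indices are updated only on long trials, so their errors shrink in the expectation sense.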


Author(s): Xinyi Ge, Jeffrey L. Stein, Tulga Ersal

This paper focuses on norm-optimal iterative learning control (NO-ILC) for single-input-single-output (SISO) linear time invariant (LTI) systems and presents an infinite time horizon approach for a frequency-dependent design of NO-ILC weighting filters. Because NO-ILC is a model-based learning algorithm, model uncertainty can degrade its performance; hence, ensuring robust monotonic convergence (RMC) against model uncertainty is important. This robustness, however, must be balanced against convergence speed (CS) and steady-state error (SSE). The weighting filter design approaches for NO-ILC in the literature provide limited design freedom to adjust this trade-off. Moreover, even though qualitative guidelines to adjust the trade-off exist, a quantitative characterization of the trade-off is not yet available. To address these two gaps, a frequency-dependent weighting filter design is proposed in this paper and the robustness, convergence speed, and steady-state error are analyzed in the frequency domain. An analytical expression characterizing the fundamental trade-off of NO-ILC with respect to robustness, convergence speed, and steady-state error at each frequency is presented. Compared to the state of the art, a frequency-dependent filter design gives increased freedom to adjust the trade-off between robustness, convergence speed, and steady-state error because it allows the design to meet different performance requirements at different frequencies. Simulation examples are given to confirm the analysis and demonstrate the utility of the developed filter design technique.
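The norm-optimal update that this design builds on has a standard closed form in the lifted (matrix) domain. The sketch below uses constant weights purely to show the mechanics; the impulse response, horizon, and weight values are invented, and the paper's contribution is precisely to make such weights frequency-dependent:

```python
import numpy as np

def lifted_plant(g, N):
    """Lower-triangular Toeplitz matrix of the impulse response (lifted SISO LTI plant)."""
    G = np.zeros((N, N))
    for i in range(N):
        G[i, : i + 1] = g[: i + 1][::-1]
    return G

def no_ilc_update(u, e, G, We, Wdu, Wu):
    """One NO-ILC step: minimize e'We e + du'Wdu du + u'Wu u over the next input."""
    A = G.T @ We @ G + Wdu + Wu
    rhs = G.T @ We @ e - Wu @ u
    return u + np.linalg.solve(A, rhs)

N = 50
g = 0.6 ** np.arange(N)           # hypothetical impulse response
G = lifted_plant(g, N)
r = np.ones(N)
We, Wdu, Wu = np.eye(N), 0.1 * np.eye(N), 1e-4 * np.eye(N)
u = np.zeros(N)
for k in range(30):
    u = no_ilc_update(u, r - G @ u, G, We, Wdu, Wu)
```

Larger Wdu slows convergence but improves robustness, and Wu trades steady-state error for robustness, which is the three-way trade-off the abstract quantifies per frequency.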


Author(s): Rahmat A. Shoureshi, Sunwook Lim, Christopher M. Aasted

This paper presents a reconfigurable control design technique that integrates a robust feedback and an iterative learning control (ILC) scheme. This technique is applied to develop vehicle control systems that are tolerant to failures due to malfunctions or damage. The design procedure includes solving the robust performance condition for a feedback controller through μ-synthesis such that the controller also satisfies the convergence condition for the iterative learning control rule. The effectiveness of the proposed approach is verified by simulation experiments using a radio-controlled (R/C) model airplane. The methods presented in this paper can be applied to the design of global intelligent control systems to improve the operating characteristics of a vehicle and increase safety and reliability.


Author(s): Shuwen Yu, Masayoshi Tomizuka

Iterative learning control (ILC) is a feedforward control strategy used to improve the performance of a system that executes the same task repeatedly, but it cannot compensate for non-repetitive disturbances. Thus a well-designed feedback controller needs to be used in combination with ILC. A robustness filter, called the Q-filter, is essential for the stability of the ILC system. The price to pay, however, is that the Q-filter makes it impossible for ILC to achieve perfect tracking of the repetitive reference or perfect cancellation of repetitive disturbances. To reduce this error, it is effective to apply a pre-designed feedforward control input in addition to ILC. In this paper, a simple P-type ILC is combined with an optimal feedback-feedforward control inspired by classic predictive control, so as to take advantage of each control strategy. It is shown that the choice of the injection point of the learned ILC effort is crucial for the tradeoff between stability and performance; therefore, stability and performance are analyzed for different injection points. A systematic approach to the combined control scheme is also proposed. The combined control scheme is attractive due to its simplicity and promising performance, and its effectiveness is verified by simulation results with a wafer scanner system.
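As a back-of-envelope illustration of why the injection point matters, one can replace the plant and feedback controller with static gains (a drastic simplification; all values below are hypothetical). Injecting the learned signal at the plant input gives iteration dynamics e_{k+1} = (1 − γPS)e_k, while injecting it at the reference gives e_{k+1} = (1 − γPCS)e_k, where S = 1/(1 + PC):

```python
# Static-gain illustration (hypothetical values); P = plant gain, C = feedback gain.
P, C = 2.0, 3.0
S = 1.0 / (1.0 + P * C)   # closed-loop sensitivity
gamma = 0.4               # P-type learning gain
e0 = 1.0 * S              # initial closed-loop tracking error for r = 1

def converge(loop_gain, iters=25):
    """Error after `iters` P-type ILC trials with the given iteration loop gain."""
    e = e0
    for _ in range(iters):
        e *= 1.0 - gamma * loop_gain
    return e

e_plant_injection = converge(P * S)    # learned signal added at the plant input
e_ref_injection = converge(P * C * S)  # learned signal added to the reference
```

With these numbers the reference-side injection contracts faster per iteration; with dynamic models the same choice also shifts the stability margins, which is the tradeoff the abstract studies.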


Author(s): Benjamin Fine, Masayoshi Tomizuka

In trajectory tracking applications where a single trajectory is followed many times, iterative learning control (ILC) is used to improve tracking performance by compensating for controller lag and disturbances that are repetitive across iterations. These tracking errors are not necessarily present throughout the iteration and may be sufficiently learned after only a few iterations. Influenced by segmented and multirate control, this paper presents a new ILC algorithm that reduces how often the ILC input signal is updated as learning progresses. Portions of the signal where sufficient learning has occurred are partitioned and approximated as constant, based on where the magnitude of the input is small and slowly changing. The organized ILC is compared to the P-type ILC formulation and is shown to perform as well as full-cycle learning. During sections of constant velocity, the organized ILC compensates for the error as quickly as the P-type ILC. In portions where tolerances are satisfied, the organized ILC begins partitioning and approximating the input signal and is shown to significantly reduce the number of times the input signal is updated. Measurement noise is also introduced, and the RMS of the error signal for each iteration is compared; the organized ILC is shown to handle measurement noise significantly better than the P-type ILC.
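A hedged sketch of the partitioning idea (the thresholds and the example signal below are invented for illustration): samples where the learned input is both small in magnitude and slowly changing are frozen to a constant, so only the remaining samples would keep being updated by the learning law:

```python
import numpy as np

def organize_input(u, mag_tol, slope_tol):
    """Approximate well-learned portions of the ILC input as constants.

    A sample is frozen when |u| < mag_tol and its gradient magnitude is below
    slope_tol; each maximal frozen run is replaced by its mean. Returns the
    approximated signal and the count of samples still being updated."""
    u = u.copy()
    frozen = (np.abs(u) < mag_tol) & (np.abs(np.gradient(u)) < slope_tol)
    i, n, active = 0, len(u), 0
    while i < n:
        if frozen[i]:
            j = i
            while j < n and frozen[j]:
                j += 1
            u[i:j] = u[i:j].mean()   # piecewise-constant approximation
            i = j
        else:
            active += 1
            i += 1
    return u, active

# invented learned input: active ramps at both ends, quiet middle section
u = np.r_[np.linspace(1.0, 0.0, 30), np.full(40, 0.01), np.linspace(0.0, 1.0, 30)]
u_org, active = organize_input(u, mag_tol=0.05, slope_tol=0.01)
```

Here the quiet middle section collapses to a single constant, so only 60 of the 100 samples would continue to receive updates.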


Author(s): Huimin Zhang, Ronghu Chi

Quantization is a significant technique in networked control for conserving limited bandwidth. In this work, two new multi-lagged-input-based quantized iterative learning control (MLI-QILC) methods are proposed, using output quantization and error quantization, respectively. The multi-lagged-input iterative dynamic linearization method (MLI-IDL) is introduced to build a linear data model of nonlinear systems using additional control inputs at lagged time instants and multiple parameters, so that the condition of a nonzero input change is no longer required. The MLI-QILC is designed with two new objective functions utilizing the quantized data of the system outputs and tracking errors, respectively. Rigorous analysis shows that the proposed MLI-QILC with output quantization guarantees that the tracking error converges to a bound related to the quantization density and the bound of the desired trajectory. Furthermore, asymptotic convergence is achieved for the proposed MLI-QILC method with error quantization. The theoretical results are verified by simulations.
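Quantized control schemes of this kind typically use a sector-bounded logarithmic quantizer. A minimal sketch of such a quantizer, driven inside a scalar P-type loop with error quantization (the plant gain, learning gain, and quantization density are invented; this is not the MLI-QILC law itself):

```python
import numpy as np

def log_quantizer(v, rho):
    """Logarithmic quantizer with density rho in (0, 1): levels are +/- rho**n.

    Rounding to the nearest level in log scale gives the sector bound
    |q(v) - v| <= (rho**-0.5 - 1) * |v|."""
    if v == 0.0:
        return 0.0
    n = np.round(np.log(abs(v)) / np.log(rho))
    return np.sign(v) * rho ** n

# error-quantized P-type ILC on a scalar static plant (hypothetical values)
b, gamma, rho = 1.0, 0.5, 0.8
r, u = 1.0, 0.0
for k in range(60):
    e = r - b * u
    u += gamma * log_quantizer(e, rho)   # the update only sees the quantized error
```

Because the quantized error stays inside a sector around the true error, the per-iteration contraction 1 − γb·q(e)/e remains below one and the error converges asymptotically, consistent with the error-quantization result above; quantizing the output instead leaves a residual bound tied to the quantization density.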


2014, Vol 24 (3), pp. 299-319
Author(s): Kamen Delchev, George Boiadjiev, Haruhisa Kawasaki, Tetsuya Mouri

This paper deals with the improvement of the stability of sampled-data (SD) feedback control for nonlinear multiple-input multiple-output time-varying systems, such as robotic manipulators, by incorporating an off-line model-based nonlinear iterative learning controller. The proposed scheme of nonlinear iterative learning control (NILC) with SD feedback is applicable to a large class of robots because sampled-data feedback is required for model-based feedback controllers, especially for robotic manipulators with complicated dynamics (6 or 7 DOF, or more), while the feedforward control from the off-line iterative learning controller is assumed to be continuous. The robustness and convergence of the proposed NILC law with SD feedback are proven, and the derived sufficient condition for convergence is the same as the condition for an NILC with a continuous feedback control input. Simulation results for the presented NILC algorithm applied to a virtual PUMA 560 robot are given in order to verify the convergence and applicability of the proposed learning controller with the SD feedback controller attached.


2018, Vol 2018, pp. 1-12
Author(s): Xiongfeng Deng, Xiuxia Sun, Ri Liu, Shuguang Liu

We address the consensus control problem for multiagent systems with time-varying delays and directed communication topology. The model of each agent includes a time-varying nonlinear dynamic and an external disturbance, where the time-varying nonlinear term satisfies the global Lipschitz condition and the disturbance term is norm-bounded. An improved control protocol, namely a high-order iterative learning control scheme, is applied to the consensus tracking problem, where the desired trajectory is generated by a virtual leader agent. Theoretical analysis shows that the improved control protocol guarantees that the tracking errors converge asymptotically to a sufficiently small interval under the given convergence conditions. Furthermore, as the bounds of the initial state differences and disturbances tend to zero, the bound of the tracking errors also tends to zero. Finally, several cases are provided to illustrate the effectiveness of the theoretical analysis.
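A high-order ILC law uses inputs and errors from several previous iterations. The sketch below runs a second-order law on a scalar static plant (the plant gain, weights, and learning gains are invented, chosen so the iteration polynomial is stable; the paper's delayed multiagent setting is not reproduced):

```python
import numpy as np

N = 100
r = np.sin(2 * np.pi * np.arange(N) / N)  # desired trajectory from a virtual leader
b = 0.8                                   # assumed static agent/plant gain
a = (0.7, 0.3)                            # weights on the two previous inputs (sum to 1)
g = (0.6, 0.2)                            # learning gains on the two previous errors

u_prev, u_curr = np.zeros(N), np.zeros(N)
e_prev = r.copy()
for k in range(40):
    e = r - b * u_curr
    u_next = a[0] * u_curr + a[1] * u_prev + g[0] * e + g[1] * e_prev
    u_prev, u_curr, e_prev = u_curr, u_next, e
```

With a[0] + a[1] = 1 the error obeys e_{k+1} = (a[0] − b·g[0])e_k + (a[1] − b·g[1])e_{k−1}; the roots of z² − 0.22z − 0.14 are 0.5 and −0.28, both inside the unit circle, so the tracking error converges across iterations.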


Author(s): P. R. Ouyang, Pong-in Pipatpaibul

Iterative learning control (ILC) is a tracking control technique aimed at improving performance for systems that work in a repetitive mode. ILC is simple and effective, and can progressively reduce tracking errors and improve system performance from iteration to iteration. In this paper, we first classify ILC schemes into three categories: offline learning, online learning, and online-offline learning. Within each scheme, P-type, D-type, PD-type, and switching-gain learning control are discussed, and the corresponding convergence conditions for each type of ILC are presented. Then, the different ILCs are applied to control a general nonlinear system with noise and disturbance, and the various schemes are tested under different conditions to compare their effectiveness and robustness. It is demonstrated that the online-offline ILCs obtain the best tracking performance, and the switching-gain learning control provides the fastest convergence speed.
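The three classical update laws named above can be written in a few lines each. The sketch below defines them and runs the P-type law on a short invented FIR plant (the plant taps and gains are hypothetical; D- and PD-type differ only in using the forward difference of the error):

```python
import numpy as np

def p_update(u, e, kp):
    """P-type: u_{k+1}(t) = u_k(t) + kp * e_k(t)."""
    return u + kp * e

def d_update(u, e, kd):
    """D-type: uses the forward difference of the error."""
    return u + kd * np.r_[np.diff(e), 0.0]

def pd_update(u, e, kp, kd):
    """PD-type: combination of the P- and D-type corrections."""
    return d_update(p_update(u, e, kp), e, kd)

def plant(u, g=np.array([1.0, 0.5])):
    """Assumed two-tap FIR plant."""
    return np.convolve(u, g)[: len(u)]

N = 100
r = np.sin(2 * np.pi * np.arange(N) / N)
u = np.zeros(N)
for k in range(30):
    u = p_update(u, r - plant(u), kp=0.5)
```

A switching-gain variant would simply change `kp` across iterations, typically starting large for fast convergence and shrinking for noise robustness.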

