Self-Tuning for Fuzzy Rule Generation Based upon Fuzzy Singleton-type Reasoning Method

Author(s):
Yan Shi
Masaharu Mizumoto
Using the fuzzy singleton-type reasoning method, we propose a self-tuning method for fuzzy rule generation. We give a neuro-fuzzy learning algorithm for tuning fuzzy rules under the fuzzy singleton-type reasoning method, and roughly design the initial tuning parameters of the fuzzy rules with a fuzzy clustering algorithm before learning the fuzzy model. This should reduce learning time, and the fuzzy rules generated by our approach are reasonable and well suited to the identified model. We demonstrate the efficiency of our proposal by identifying nonlinear functions.
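In singleton-type reasoning, the crisp output is the firing-strength-weighted average of singleton consequents, and self-tuning adjusts the antecedent and singleton parameters. A minimal sketch of the inference step, assuming Gaussian antecedent membership functions (the paper's exact membership shapes and tuning rules are not reproduced here):

```python
import numpy as np

def singleton_inference(x, centers, widths, singletons):
    """Fuzzy singleton-type reasoning: the crisp output is the
    firing-strength-weighted average of the singleton consequents."""
    # Gaussian antecedent memberships (an illustrative assumption).
    w = np.exp(-((x - centers) ** 2) / (2.0 * widths ** 2))
    return float(np.sum(w * singletons) / np.sum(w))
```

A neuro-fuzzy tuning step would then adjust `centers`, `widths`, and `singletons` by gradient descent on the squared output error, with the clustering-based initialization supplying their starting values.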

2012
Vol 152-154
pp. 1133-1137
Author(s):
Jian Hu Jiang
Chao Wu
Gang Zhang

In this paper, a fuzzy self-tuning controller is introduced first. The fuzzy model is built from experience with PID parameter tuning, using fuzzy set theory. Parameter tuning is achieved through fuzzy reasoning and decision making based on the actual response, and the result is applied to robot control. A mathematical model of a two-link robot is built, together with its geometric and dynamic equations, through coordinate transformation and matrix operations. Finally, a fuzzy PD controller with the self-tuning method is applied to control the robot. Simulation in Matlab shows that the control method proposed in this paper performs better than traditional ones.
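The idea of fuzzy gain self-tuning can be sketched in a few lines: fuzzy rules on the error and error rate scale the PD gains before the control law is applied. The membership shapes, gain values, and the two rules below are purely illustrative, not the paper's tuned design:

```python
def triangular(x, a, b, c):
    """Triangular membership function (shapes are illustrative)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_pd(e, de, kp0=2.0, kd0=0.5):
    """PD law whose gains are nudged by two illustrative fuzzy rules:
    IF |error| is big THEN raise Kp; IF |error rate| is big THEN raise Kd."""
    big_e = triangular(abs(e), 0.2, 1.0, 2.0)
    big_de = triangular(abs(de), 0.2, 1.0, 2.0)
    kp = kp0 * (1.0 + 0.5 * big_e)   # concentrate effort on large errors
    kd = kd0 * (1.0 + 0.5 * big_de)  # damp fast error changes harder
    return kp * e + kd * de
```

A full controller would feed `e` and `de` from the robot's joint tracking error at each simulation step and could use many more rules over signed error regions.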


Author(s):
Hiroshi Kawakami
Osamu Katai
Tadataka Konishi
This paper proposes a new Q-learning method for the case where the states (conditions) and actions of the system are assumed to be continuous. The entries of the Q-table are interpolated by fuzzy inference. The initial set of fuzzy rules is made of all combinations of conditions and actions relevant to the problem. Each rule is then associated with a value by which the Q-values of condition/action pairs are estimated. The values are revised by the Q-learning algorithm so as to make the fuzzy rule system effective. Although this framework may require a huge number of initial fuzzy rules, we show that a considerable reduction of that number can be achieved by adopting what we call Condition Reduced Fuzzy Rules (CRFR). The antecedent part of a CRFR consists of all actions and the selected conditions, and its consequent is set to be its Q-value. Finally, experimental results show that controllers with CRFRs perform as well as the system with the most detailed fuzzy control rules, while the total number of parameters revised through the whole learning process is considerably reduced and the number of parameters revised at each learning step is increased.
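The core mechanism, interpolating Q-values from per-rule values and spreading the TD update over rules by firing strength, can be sketched compactly. This is a reduced illustration with a 1-D continuous state and a small discrete action set, not the paper's full condition/action rule base or its CRFR construction:

```python
import numpy as np

N_RULES, N_ACTIONS = 5, 3
state_centers = np.linspace(0.0, 1.0, N_RULES)
q = np.zeros((N_RULES, N_ACTIONS))   # one value per (state rule, action)

def memberships(s, width=0.25):
    """Triangular antecedent memberships, normalized to sum to 1."""
    mu = np.maximum(0.0, 1.0 - np.abs(s - state_centers) / width)
    return mu / mu.sum()

def q_value(s, a):
    """Q(s, a) interpolated from the rule values by fuzzy inference."""
    return float(memberships(s) @ q[:, a])

def q_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Distribute the TD error over rules by their firing strengths."""
    target = r + gamma * max(q_value(s_next, b) for b in range(N_ACTIONS))
    q[:, a] += alpha * memberships(s) * (target - q_value(s, a))
```

Condition reduction would correspond to merging rules whose condition part barely affects the learned values, shrinking the first axis of `q`.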


Author(s):  
A. GONZÁLEZ
R. PÉREZ

A very important problem associated with the use of learning algorithms is fixing the correct assignment of the initial domains of the predictive variables. In the fuzzy case, this problem is equivalent to defining the fuzzy labels for each variable. In this work, we propose including in a learning algorithm, called SLAVE, a particular kind of linguistic hedge as a way to modify the initial semantics of the labels. These linguistic hedges allow us both to learn and to tune fuzzy rules.
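Linguistic hedges reshape a label's membership function without redefining its domain. SLAVE's own hedge definitions are given in the paper; as a generic illustration, the classical Zadeh hedges apply a power transform to an existing label:

```python
import numpy as np

def triangle(x, a, b, c):
    """A triangular fuzzy label over a variable's domain."""
    return float(np.maximum(0.0, np.minimum((x - a) / (b - a),
                                            (c - x) / (c - b))))

# Classical Zadeh hedges, used here purely for illustration.
def very(mu):          # concentration: sharpens the label
    return mu ** 2

def more_or_less(mu):  # dilation: widens the label
    return mu ** 0.5
```

Tuning a rule can then mean swapping "X is big" for "X is very big" (or "more or less big") when that variant fits the data better, which modifies the label's semantics without relearning its support.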


Author(s):  
Min-Soeng Kim
Sun-Gi Hong
Ju-Jang Lee

Fuzzy logic controllers consist of if-then fuzzy rules generally adopted from a priori expert knowledge. However, it is not always easy or cheap to obtain expert knowledge. Q-learning can be used to acquire knowledge from experiences even without the model of the environment. The conventional Q-learning algorithm cannot deal with continuous states and continuous actions. However, the fuzzy logic controller can inherently receive continuous input values and generate continuous output values. Thus, in this paper, the Q-learning algorithm is incorporated into the fuzzy logic controller to compensate for each method’s disadvantages. Modified fuzzy rules are proposed in order to incorporate the Q-learning algorithm into the fuzzy logic controller. This combination results in the fuzzy logic controller that can learn through experience. Since Q-values in Q-learning are functional values of the state and the action, we cannot directly apply the conventional Q-learning algorithm to the proposed fuzzy logic controller. Interpolation is used in each modified fuzzy rule so that the Q-value is updatable.
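One way a fuzzy logic controller yields continuous actions from discrete per-rule Q-values is to let each rule vote for its greedy candidate action and blend the votes by firing strength. The sketch below illustrates that blending idea under assumed triangular memberships and a 1-D state; it is not the paper's exact modified-rule form or interpolation scheme:

```python
import numpy as np

actions = np.array([-1.0, 0.0, 1.0])       # candidate actions per rule
rule_centers = np.linspace(-1.0, 1.0, 5)
q = np.zeros((5, actions.size))            # q-value per (rule, action)

def firing(s, width=0.5):
    """Normalized triangular firing strengths of the rules for state s."""
    mu = np.maximum(0.0, 1.0 - np.abs(s - rule_centers) / width)
    return mu / mu.sum()

def act(s):
    """Each rule votes for its greedy candidate action; the controller
    output is their firing-strength-weighted average, hence continuous."""
    mu = firing(s)
    greedy = actions[np.argmax(q, axis=1)]
    return float(mu @ greedy)
```

Because the output interpolates between rule votes, the controller produces a continuum of actions even though each rule's candidate set is finite, which is what lets Q-learning and the fuzzy controller cover each other's gaps.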


Author(s):  
Chong Tak Yaw
Shen Young Wong
Keem Sian Yap

Extreme Learning Machine (ELM) is widely known as a more effective learning algorithm than conventional learning methods in terms of both learning speed and generalization. In the traditional fuzzy inference method based on "if-then" rules, all input and output objects are assigned to antecedent and consequent components, respectively. A major dilemma, however, is that the number of fuzzy rules keeps increasing until the system and the arrangement of the rules become complicated. The single input rule modules connected fuzzy inference (SIRM) method addresses this by combining the outputs of single-input fuzzy rule modules. In this paper, we put forward a novel single input rule modules method based on extreme learning machine (denoted SIRM-ELM) for solving data regression problems. In this hybrid model, the SIRM concept is applied to the hidden neurons of ELM, each of which represents a single-input fuzzy rule; hence, the number of fuzzy rules equals the number of hidden neurons. The effectiveness of the proposed SIRM-ELM model is verified with sigmoid activation functions on several benchmark datasets and on NOx emission data from a power generation plant. Experimental results illustrate that the proposed SIRM-ELM model achieves a small root mean square error, i.e., 0.027448 for prediction of NO<sub>x</sub> emission.
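The hybrid can be sketched in ELM's standard two steps: random single-input hidden features, then a closed-form least-squares solve for the output weights. Here each hidden neuron sees exactly one input variable through a randomly weighted sigmoid; this is an illustrative reading of the SIRM-as-hidden-neuron idea (with an added intercept column), not the paper's exact formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sirm_elm_fit(X, y, rng):
    """Each hidden neuron is a single-input rule module: column j of the
    hidden layer depends only on input variable j (ELM-style random
    weights); output weights are solved in closed form."""
    n, d = X.shape
    a = rng.normal(size=d)            # random slope per module
    b = rng.normal(size=d)            # random bias per module
    H = np.column_stack([sigmoid(X * a + b), np.ones(n)])  # + intercept
    beta = np.linalg.pinv(H) @ y      # Moore-Penrose least squares
    return a, b, beta

def sirm_elm_predict(X, a, b, beta):
    H = np.column_stack([sigmoid(X * a + b), np.ones(X.shape[0])])
    return H @ beta
```

Because the hidden weights are never trained, fitting reduces to a single pseudo-inverse, which is the source of ELM's learning-speed advantage the abstract mentions.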


1996
Vol 8 (4)
pp. 757-767
Author(s):
Yan SHI
Masaharu MIZUMOTO
Naoyoshi YUBAZAKI
Masayuki OTANI
