An Improved Mamdani Fuzzy Neural Network Based on the PSO Algorithm and New Parameter Optimization

Author(s):  
Lei Meng ◽  
Shoulin Yin ◽  
Xinyuan Hu

Parameter optimization of the Mamdani model is prone to falling into local optima. To address this problem, we propose a new algorithm built on a Mamdani fuzzy neural network. The scheme first uses fuzzy clustering based on the particle swarm optimization (PSO) algorithm to determine the initial parameters of the network, then applies PSO to optimize the model parameters, and finally uses the gradient descent method to refine the parameters further. In this way, the fuzzy rules can be automatically adjusted, modified, and perfected. Experimental results show that the new algorithm improves the approximation ability of Mamdani fuzzy neural networks.
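The three-stage scheme (PSO-based initialization, PSO optimization, gradient-descent refinement) can be sketched in miniature. The code below runs a plain PSO and then fine-tunes its result by gradient descent on a toy quadratic objective; the swarm coefficients, the objective, and all function names are illustrative assumptions, not the paper's actual Mamdani-network formulation:

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=100, lo=-5.0, hi=5.0):
    """Plain particle swarm optimization; returns the best position found."""
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest

def gradient_refine(grad, x, lr=0.01, steps=200):
    """Gradient-descent fine-tuning starting from the PSO solution."""
    for _ in range(steps):
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x

# Toy stand-in for the network's parameter-error surface: sum of squares.
f = lambda x: sum(xi * xi for xi in x)
grad = lambda x: [2.0 * xi for xi in x]
x = gradient_refine(grad, pso_minimize(f, dim=2))
```

In the paper's setting, `f` would instead be the network's output error as a function of the membership-function and rule parameters.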

2014 ◽  
pp. 99-106
Author(s):  
Leonid Makhnist ◽  
Nikolaj Maniakov

Two new techniques for training multilayer neural networks are proposed. Both are based on the gradient descent method. For each technique, formulas for calculating the adaptive training steps are given. The matrix algorithmizations presented for these techniques are very helpful in their program implementation.
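The abstract does not state the adaptive-step formulas themselves. As a hedged illustration of gradient descent with an adaptively recomputed step length, the sketch below uses the well-known Barzilai-Borwein step, which may differ from the authors' formulas:

```python
def gd_adaptive(grad, x0, steps=50):
    """Gradient descent whose step length is recomputed each iteration
    from the Barzilai-Borwein formula (one common adaptive-step choice)."""
    x, g, step = list(x0), grad(x0), 0.01
    for _ in range(steps):
        x_new = [xi - step * gi for xi, gi in zip(x, g)]
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]      # parameter change
        yv = [a - b for a, b in zip(g_new, g)]     # gradient change
        sy = sum(a * b for a, b in zip(s, yv))
        yy = sum(a * a for a in yv)
        if sy > 0.0 and yy > 1e-12:
            step = sy / yy                          # adaptive step length
        x, g = x_new, g_new
    return x

# Ill-conditioned toy quadratic: f(x) = x1^2 + 10*x2^2.
f = lambda x: x[0] ** 2 + 10.0 * x[1] ** 2
grad = lambda x: [2.0 * x[0], 20.0 * x[1]]
x = gd_adaptive(grad, [1.0, 1.0])
```

The adaptive step lets the method handle the mismatched curvature of the two coordinates without manual tuning of the learning rate.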


2019 ◽  
Vol 9 (21) ◽  
pp. 4568
Author(s):  
Hyeyoung Park ◽  
Kwanyong Lee

The gradient descent method is an essential algorithm for the learning of neural networks. Among the diverse variations of gradient descent developed to accelerate learning, natural gradient learning is based on the theory of information geometry on the stochastic neuromanifold and is known to have ideal convergence properties. Despite its theoretical advantages, the pure natural gradient has some limitations that prevent its practical usage. To obtain the explicit value of the natural gradient, the true probability distribution of the input variables must be known, and the inverse of a matrix whose size is the square of the number of parameters must be calculated. Though an adaptive estimation of the natural gradient has been proposed as a solution, it was originally developed for the online learning mode, which is computationally inefficient for learning from large data sets. In this paper, we propose a novel adaptive natural gradient estimation for the mini-batch learning mode, which is commonly adopted for big data analysis. For two representative stochastic neural network models, we present explicit parameter update rules and the learning algorithm. Through experiments on three benchmark problems, we confirm that the proposed method has superior convergence properties compared to conventional methods.
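As a rough illustration of the idea (not the paper's algorithm), the sketch below performs natural-gradient updates for a linear-Gaussian model, maintaining a moving-average Fisher estimate from each mini-batch in place of the unknown true input distribution. For simplicity it solves a linear system at every step, whereas the paper's contribution is an adaptive estimate of the inverse that avoids exactly this cost; all names and constants here are assumptions:

```python
import numpy as np

def natural_gradient_step(theta, X, y, G, lr=0.5, eps=0.1):
    """One mini-batch natural-gradient step for a linear-Gaussian model.

    G is a moving-average estimate of the Fisher matrix (here E[x x^T]),
    updated from each mini-batch rather than the true distribution.
    """
    n = len(y)
    grad = X.T @ (X @ theta - y) / n               # ordinary gradient
    G = (1.0 - eps) * G + eps * (X.T @ X / n)      # adaptive Fisher estimate
    theta = theta - lr * np.linalg.solve(G, grad)  # natural-gradient update
    return theta, G

# Recover known parameters of a noiseless linear model from mini-batches.
rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0])
X_all = rng.normal(size=(200, 2))
y_all = X_all @ theta_true
theta, G = np.zeros(2), np.eye(2)
for _ in range(50):
    idx = rng.integers(0, 200, size=32)            # mini-batch indices
    theta, G = natural_gradient_step(theta, X_all[idx], y_all[idx], G)
```

Preconditioning the gradient by the (estimated) Fisher matrix is what gives natural gradient learning its favorable convergence behavior relative to plain gradient descent.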


Author(s):  
Naoyoshi Yubazaki ◽  
Jianqiang Yi ◽  
Kaoru Hirota
A new fuzzy inference model, the SIRMs (Single Input Rule Modules) connected fuzzy inference model, is proposed for plural-input fuzzy control. For each input item, an importance degree is defined and a single-input fuzzy rule module is constructed. The importance degrees control the roles of the input items in the system. The model output is obtained as the sum of the products of the importance degree and the fuzzy inference result of each SIRM. The proposed model needs very few rules and parameters, and the rules can be designed much more easily. The new model is first applied to typical second-order lag systems. The simulation results show that the proposed model can largely improve the control performance compared with the conventional fuzzy inference model. A tuning algorithm based on the gradient descent method is then given and used to adjust the parameters of the proposed model for identifying 4-input 1-output nonlinear functions. The identification results indicate that the proposed model also has the ability to identify nonlinear systems.
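The SIRMs connection itself is easy to state in code: each module performs single-input fuzzy inference, and the model output is the importance-weighted sum of the module outputs. The triangular membership functions and the rule table below are illustrative assumptions, not values from the paper:

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sirm_output(x, rules):
    """Single-input rule module: weighted average of rule consequents.
    rules is a list of ((a, b, c), consequent) pairs for one input item."""
    num = den = 0.0
    for (a, b, c), consequent in rules:
        mu = triangular(x, a, b, c)
        num += mu * consequent
        den += mu
    return num / den if den > 0.0 else 0.0

def sirms_inference(xs, modules, importances):
    """SIRMs-connected inference: importance-weighted sum of module outputs."""
    return sum(w * sirm_output(x, rules)
               for x, rules, w in zip(xs, modules, importances))

# One rule module mapping an input on [-1, 1] to negative / zero / positive.
rules = [((-2.0, -1.0, 0.0), -1.0),
         ((-1.0, 0.0, 1.0), 0.0),
         ((0.0, 1.0, 2.0), 1.0)]
y = sirms_inference([0.5, -0.5], [rules, rules], importances=[1.0, 0.5])
```

Because each module sees only one input, a system with k inputs and m labels per input needs only k·m rules instead of the m^k rules of a full conventional rule table, which is the source of the model's parameter economy.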


2012 ◽  
Vol 233 ◽  
pp. 409-415
Author(s):  
Zhong Jian Tang ◽  
Miao Song

The production rate of hydrocyanic acid is difficult to measure directly. A soft measurement model of the production rate can therefore be established with neural networks based on interrelated measurable engineering signals. Before being applied in engineering, the soft measurement model is trained by the PSO algorithm instead of the fast gradient descent method. Simulations show that the soft measurement model trained by PSO possesses better measuring accuracy and stronger generalization ability. This kind of soft measurement model can be applied in the practical production engineering of hydrocyanic acid.


1998 ◽  
Vol 35 (02) ◽  
pp. 395-406 ◽  
Author(s):  
Jürgen Dippon

A stochastic gradient descent method is combined with a consistent auxiliary estimate to achieve global convergence of the recursion. Using step lengths that converge to zero more slowly than 1/n and averaging the trajectories yields the optimal convergence rate of 1/√n and the optimal variance of the asymptotic distribution. Possible applications can be found in maximum likelihood estimation, regression analysis, training of artificial neural networks, and stochastic optimization.
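The averaging scheme can be illustrated on a one-dimensional stochastic problem: SGD with step lengths c·n^(−α) for some α ∈ (1/2, 1), returning the average of the trajectory (Polyak-Ruppert averaging). The constants and the toy objective below are assumptions for the sketch, not from the paper:

```python
import random

def averaged_sgd(grad_sample, x0, n_steps=2000, c=0.5, alpha=0.7):
    """SGD with step lengths c * n**(-alpha), 1/2 < alpha < 1, returning
    the running average of the trajectory (Polyak-Ruppert averaging)."""
    x, avg = x0, 0.0
    for n in range(1, n_steps + 1):
        x -= c * n ** (-alpha) * grad_sample(x)    # slowly decaying steps
        avg += (x - avg) / n                        # running trajectory average
    return avg

# Toy problem: minimize E[(x - z)^2 / 2] with z ~ N(1, 1); the minimizer is 1.
random.seed(0)
noisy_grad = lambda x: x - random.gauss(1.0, 1.0)
x_hat = averaged_sgd(noisy_grad, x0=0.0)
```

The individual iterates bounce around the optimum because the steps decay slower than 1/n, but their average smooths the noise out and attains the optimal 1/√n rate.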

