A One-Layer Dual Recurrent Neural Network with a Heaviside Step Activation Function for Linear Programming with Its Linear Assignment Application

Author(s):  
Qingshan Liu ◽  
Jun Wang


Author(s):  
Zahra Sadat Mirzazadeh ◽  
Javad Banihassan ◽  
Amin Mansoori

The classic linear assignment method is a multi-criteria decision-making approach in which criteria are weighted and each rank is assigned to exactly one alternative. In this study, a multi-objective linear programming (MOLP) method is proposed that removes the need to compute criteria weights by prioritizing the decision attributes instead, and that allows a rank to be assigned to more than one alternative. An objective function is defined for each attribute, and the MOLP is solved by the absolute-priority and comprehensive-criterion methods. The resulting linear programming problems are solved with a recurrent neural network (RNN), whose Lyapunov stability is proved. Comparisons with TOPSIS, VIKOR, and MOORA, the most common multi-criteria decision schemes, show that the proposed approach is highly consistent with these methods.
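The absolute-priority method described here is commonly realized as lexicographic optimization: solve one LP per objective in priority order, freezing each optimal value as a constraint before moving on. The sketch below illustrates that idea with made-up data, using scipy's LP solver in place of the paper's RNN; the objectives, constraints, and tolerance are illustrative assumptions, not the paper's problem.

```python
# A minimal sketch of the absolute-priority (lexicographic) method:
# one LP per objective, each previous optimum frozen as a constraint.
# All data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

objectives = [np.array([-3.0, -1.0]),   # highest-priority objective (minimize)
              np.array([0.0, -1.0])]    # lower-priority objective
A_ub = np.array([[1.0, 1.0],
                 [1.0, 0.0]])
b_ub = np.array([4.0, 3.0])

A_extra, b_extra = [], []               # accumulated optimality constraints
for c in objectives:
    res = linprog(c,
                  A_ub=np.vstack([A_ub] + A_extra) if A_extra else A_ub,
                  b_ub=np.concatenate([b_ub] + b_extra) if b_extra else b_ub,
                  bounds=[(0, None)] * 2)
    # Freeze this objective at its optimum: c @ x <= c @ x* (small tolerance).
    A_extra.append(c.reshape(1, -1))
    b_extra.append(np.array([res.fun + 1e-9]))

print("lexicographic optimum:", res.x)  # here: [3. 1.]
```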


2008 ◽  
Vol 20 (5) ◽  
pp. 1366-1383 ◽  
Author(s):  
Qingshan Liu ◽  
Jun Wang

A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network.
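The paper's exact model is not reproduced here, but the following sketch conveys the general idea in simplified form: an exact-penalty gradient flow with a discontinuous sign activation, one state variable per decision variable, driven by a sufficiently high gain. The toy LP, gain, and step size are illustrative assumptions.

```python
# A minimal sketch (not the authors' exact model): exact-penalty gradient flow
# with a discontinuous sign activation for  min c^T x  s.t.  A x = b, x >= 0.
# One neuron per decision variable; a high enough gain makes the equilibrium
# coincide with the LP optimum.
import numpy as np

c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

gain, step = 10.0, 1e-4          # penalty gain and forward-Euler step size
x = np.zeros(2)                  # one state per decision variable
for _ in range(200_000):
    # Subgradient of  c^T x + gain * (|Ax - b|_1 + sum max(0, -x_i))
    grad = c + gain * A.T @ np.sign(A @ x - b) - gain * (x < 0)
    x -= step * grad             # Euler integration of dx/dt = -grad

print(np.round(x, 3))            # approaches the LP optimum x* = (1, 0)
```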


Author(s):  
George Mourgias-Alexandris ◽  
George Dabos ◽  
Nikolaos Passalis ◽  
Anastasios Tefas ◽  
Angelina Totovic ◽  
...  

1999 ◽  
Vol 11 (5) ◽  
pp. 1069-1077 ◽  
Author(s):  
Danilo P. Mandic ◽  
Jonathon A. Chambers

A relationship is provided between the learning rate η of the training algorithm and the slope β of the nonlinear activation function for a class of recurrent neural networks (RNNs) trained by the real-time recurrent learning (RTRL) algorithm. It is shown that an arbitrary RNN can be obtained from a referent RNN by imposing deterministic rules on its weights and learning rate. Such relationships reduce the number of degrees of freedom when solving the nonlinear optimization task of finding the optimal RNN parameters.
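The trade-off can be checked numerically in the simplest static case, a single sigmoid neuron: a neuron with slope β, weights w, and rate η matches a referent slope-1 neuron with weights βw and rate β²η. The paper derives the analogous rules for RTRL-trained RNNs; the check below is a one-step simplification with made-up data.

```python
# One-step numerical check of the slope/learning-rate equivalence for a single
# sigmoid neuron (a static simplification of the RTRL result in the paper).
import numpy as np

def sigmoid(z, beta=1.0):
    return 1.0 / (1.0 + np.exp(-beta * z))

rng = np.random.default_rng(0)
x, d = rng.normal(size=3), 0.7           # input and desired output
w, beta, eta = rng.normal(size=3), 2.5, 0.1

# Gradient step on e^2 / 2, e = d - y, for the slope-beta neuron.
y = sigmoid(w @ x, beta)
w_new = w + eta * (d - y) * beta * y * (1 - y) * x

# Same step for the referent slope-1 neuron with scaled weights and rate.
v = beta * w
y_ref = sigmoid(v @ x)                   # identical output by construction
v_new = v + (beta**2 * eta) * (d - y_ref) * y_ref * (1 - y_ref) * x

print(np.allclose(beta * w_new, v_new))  # True: the two updates coincide
```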


Complexity ◽  
2017 ◽  
Vol 2017 ◽  
pp. 1-7 ◽  
Author(s):  
Zhan Li ◽  
Hong Cheng ◽  
Hongliang Guo

This brief proposes a general nonlinear recurrent neural network framework with a global convergence property for solving the generalized linear matrix equation (GLME) online. If a linear activation function is used, the neural state matrix converges globally and exponentially to the unique theoretical solution of the GLME. Moreover, two specific types of nonlinear activation functions are proposed for the general model that achieve convergence superior to the linear case. Illustrative examples demonstrate the efficacy of the general nonlinear recurrent neural network model and its superior convergence when activated by these nonlinear activation functions.
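The sketch below shows gradient-type neural dynamics for one common GLME form, A X B = C, integrated by forward Euler; both the form of the equation and the sign-power activation are assumptions drawn from this literature, not necessarily the paper's exact model or activation functions.

```python
# A minimal sketch of gradient-type neural dynamics for A X B = C:
#   dX/dt = -gamma * A^T Phi(A X B - C) B^T,
# where Phi is an elementwise activation; the identity gives the linear case.
import numpy as np

rng = np.random.default_rng(1)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
X_true = rng.normal(size=(3, 3))
C = A @ X_true @ B                      # so the unique solution is X_true

def phi(E, p=3):
    # Elementwise sign-power activation sign(e) * |e|^(1/p), an assumed
    # nonlinearity from this literature; p = 1 recovers the linear case.
    return np.sign(E) * np.abs(E) ** (1.0 / p)

gamma, step = 10.0, 1e-4
X = np.zeros((3, 3))
for _ in range(200_000):
    E = A @ X @ B - C                   # residual of the matrix equation
    X -= step * gamma * A.T @ phi(E) @ B.T

print(np.linalg.norm(A @ X @ B - C))    # residual shrinks toward zero
```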


2017 ◽  
Vol 47 (9) ◽  
pp. 1226-1241 ◽  
Author(s):  
Shukai Duan ◽  
Tengteng Guo ◽  
Mengzhe Zhou ◽  
Lidan Wang
