Letter to the Editor: A DISCRETE-TIME LAGRANGIAN NETWORK FOR SOLVING CONSTRAINED QUADRATIC PROGRAMS

2000 ◽  
Vol 10 (04) ◽  
pp. 261-265 ◽  
Author(s):  
WAI SUM TANG ◽  
JUN WANG

A discrete-time recurrent neural network, called the discrete-time Lagrangian network, is proposed in this letter for solving convex quadratic programs. It is developed from the classical Lagrange optimization method and solves quadratic programs without using any penalty parameter. The condition under which the neural network globally converges to the optimal solution of the quadratic program is given. Simulation results are presented to illustrate its performance.
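
The Lagrangian approach described above can be sketched as a first-order saddle-point iteration: gradient descent on the primal variables and gradient ascent on the multipliers, with no penalty parameter. The step size, iteration count, and the specific Euler discretisation below are illustrative assumptions, not the letter's exact network.

```python
import numpy as np

def lagrangian_network_qp(Q, c, A, b, h=0.01, iters=20000):
    """Discrete-time Lagrangian dynamics for min 1/2 x'Qx + c'x s.t. Ax = b."""
    n, m = Q.shape[0], A.shape[0]
    x, lam = np.zeros(n), np.zeros(m)
    for _ in range(iters):
        # descend on x along the Lagrangian gradient, ascend on the multipliers
        x_new = x - h * (Q @ x + c + A.T @ lam)
        lam = lam + h * (A @ x - b)
        x = x_new
    return x, lam

# Example: min x1^2 + x2^2 s.t. x1 + x2 = 2  ->  x* = (1, 1)
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.zeros(2)
A = np.array([[1.0, 1.0]])
b = np.array([2.0])
x_star, lam_star = lagrangian_network_qp(Q, c, A, b)
```

For a positive definite Q the continuous-time version of these dynamics converges to the unique saddle point of the Lagrangian, which is why no penalty parameter is needed.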

2008 ◽  
Vol 20 (5) ◽  
pp. 1366-1383 ◽  
Author(s):  
Qingshan Liu ◽  
Jun Wang

A one-layer recurrent neural network with a discontinuous activation function is proposed for linear programming. The number of neurons in the neural network is equal to that of decision variables in the linear programming problem. It is proven that the neural network with a sufficiently high gain is globally convergent to the optimal solution. Its application to linear assignment is discussed to demonstrate the utility of the neural network. Several simulation examples are given to show the effectiveness and characteristics of the neural network.
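
One way to picture a one-layer network with a discontinuous activation and high gain is an exact-penalty subgradient flow: the state vector has one entry per decision variable, and a step activation switches on whenever a constraint is violated. The dynamics, gain, and Euler discretisation below are illustrative stand-ins, not the paper's exact model.

```python
import numpy as np

def lp_network(c, A, b, sigma=10.0, h=1e-3, iters=5000):
    """Subgradient flow for min c'x s.t. Ax <= b with gain sigma."""
    x = np.zeros(len(c))
    for _ in range(iters):
        # discontinuous activation: 1 on each violated constraint, else 0
        g = (A @ x - b > 0).astype(float)
        x = x - h * (c + sigma * A.T @ g)
    return x

# Example: min -(x1 + x2) s.t. x1 <= 1, x2 <= 1  ->  x* = (1, 1)
c = np.array([-1.0, -1.0])
A = np.eye(2)
b = np.array([1.0, 1.0])
x_star = lp_network(c, A, b)
```

Note that the state dimension equals the number of decision variables, matching the one-layer structure described in the abstract; a sufficiently high gain sigma keeps the trajectory pinned to the constraint boundary at the optimum.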


2012 ◽  
Vol 2012 ◽  
pp. 1-18 ◽  
Author(s):  
Quan-Ju Zhang ◽  
Xiao Qing Lu

This paper presents a novel continuous-time recurrent neural network model which performs nonlinear fractional optimization subject to interval constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of optima of the objective function to be minimized under interval constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and converges to an exact optimal solution from any initial point chosen in the feasible interval region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network for nonlinear fractional programming problems with interval constraints.
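
The primal property described above (trajectories never leave the box) is exactly what a projected-gradient flow provides. The sketch below is an illustrative Euler-discretised stand-in for such a continuous-time network, with a hypothetical fractional objective; it is not the paper's specific model.

```python
import numpy as np

def interval_projection_flow(grad, l, u, x0, h=0.01, iters=2000):
    """Projected-gradient dynamics over the interval constraints l <= x <= u."""
    x = np.clip(np.asarray(x0, dtype=float), l, u)  # start inside the box
    for _ in range(iters):
        # project each gradient step back onto [l, u], so the trajectory
        # can never escape the feasible region
        x = np.clip(x - h * grad(x), l, u)
    return x

# Example: minimise the fractional objective f(x) = (x + 1) / (x + 3) on [0, 2].
# f'(x) = 2 / (x + 3)^2 > 0 on the interval, so the minimum is at x = 0.
grad = lambda x: 2.0 / (x + 3.0) ** 2
x_star = interval_projection_flow(grad, l=0.0, u=2.0, x0=[1.5])
```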


2014 ◽  
Vol 8 (1) ◽  
pp. 723-728 ◽  
Author(s):  
Chenhao Niu ◽  
Xiaomin Xu ◽  
Yan Lu ◽  
Mian Xing

Short-term load forecasting is essential for the daily planning and operation of an electric power system, and is an important basis for economic dispatch, scheduling, and safe operation. Neural networks, which have strong nonlinear fitting capability, are widely used in load forecasting and achieve good prediction accuracy on nonlinear chaotic time series. However, a neural network easily falls into a local optimum and may fail to find the global optimal solution. This paper integrates traditional optimization algorithms and proposes a hybrid intelligent optimization algorithm based on particle swarm optimization and ant colony optimization (ACO-PSO) to improve the generalization of the neural network. In the empirical analysis, electricity consumption in a selected area is used for validation. Compared with the traditional BP neural network and statistical methods, the experimental results demonstrate that the improved model yields more precise results and stronger generalization ability.
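
The particle-swarm half of such a hybrid can be sketched as follows: a swarm searches weight space for a minimum of the training error, giving the network a globally-searched starting point. The quadratic stand-in error surface, all hyperparameters, and the omission of the ACO component are illustrative assumptions.

```python
import numpy as np

def pso_minimise(error, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Plain PSO: velocities pulled toward personal and global bests."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))  # candidate weight vectors
    v = np.zeros_like(x)                        # particle velocities
    pbest = x.copy()                            # each particle's best position
    pbest_val = np.array([error(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()  # swarm-wide best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([error(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

# Stand-in for a network's training error, with global minimum at (1, 2)
error = lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2
w_star = pso_minimise(error, dim=2)
```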


2013 ◽  
Vol 860-863 ◽  
pp. 2791-2795 ◽
Author(s):  
Qian Xiao ◽  
Yu Shan Jiang ◽  
Ru Zheng Cui

To address the large computational workload of the adaptive algorithm in a wavelet-transform-based adaptive filter, which limits the filtering speed, a wavelet-based neural network adaptive filter is constructed in this paper. Since neural networks have the ability of distributed storage and fast self-evolution, a Hopfield neural network is used to implement the LMS algorithm of the adaptive filter so as to improve the speed of operation. The simulation results show that the new filter can achieve rapid real-time denoising.
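
The LMS core that the paper maps onto a Hopfield network can be written directly as a stochastic-gradient weight update; the wavelet front end and the Hopfield implementation are omitted here, and the signals, filter length, and step size are illustrative. A toy system-identification run shows the update converging to an unknown FIR response.

```python
import numpy as np

def lms(x, d, n_taps, mu=0.05):
    """Adapt FIR weights w so that w . [x[k], x[k-1], ...] tracks d[k]."""
    w = np.zeros(n_taps)
    for k in range(n_taps, len(x)):
        window = x[k - n_taps + 1:k + 1][::-1]  # newest sample first
        e = d[k] - w @ window                    # instantaneous error
        w = w + mu * e * window                  # LMS weight update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)         # white reference input
h = np.array([0.5, -0.3, 0.2])        # unknown system to identify
d = np.convolve(x, h)[:len(x)]        # its (noise-free) output
w = lms(x, d, n_taps=3)               # w converges toward h
```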


2014 ◽  
Vol 538 ◽  
pp. 167-170 ◽
Author(s):  
Hui Zhong Mao ◽  
Chen Qiao ◽  
Wen Feng Jing ◽  
Xi Chen ◽  
Jin Qin Mao

This paper presents the global convergence theory of the discrete-time uniform pseudo projection anti-monotone network with a quasi-symmetric matrix, which removes the connection-matrix constraints. The theory widens the range of applications of the discrete-time uniform pseudo projection anti-monotone network and is valid for many kinds of discrete recurrent neural network models.


Author(s):  
Raheleh Jafari ◽  
Sina Razvarz ◽  
Alexander Gegov ◽  
Satyam Paul

In order to model fuzzy nonlinear systems, fuzzy equations with Z-number coefficients are used in this chapter. Modeling a fuzzy nonlinear system amounts to obtaining the Z-number coefficients of the fuzzy equations. In this work, a neural network approach is used to find the coefficients of the fuzzy equations. Some examples with applications in mechanics are given. The simulation results demonstrate that the proposed neural network is effective for obtaining the Z-number coefficients of fuzzy equations.


2018 ◽  
Vol 24 (3) ◽  
pp. 467-489 ◽  
Author(s):  
MARC TANTI ◽  
ALBERT GATT ◽  
KENNETH P. CAMILLERI

When a recurrent neural network (RNN) language model is used for caption generation, the image information can be fed to the neural network either by directly incorporating it in the RNN – conditioning the language model by 'injecting' image features – or in a layer following the RNN – conditioning the language model by 'merging' image features. While both options are attested in the literature, there is as yet no systematic comparison between the two. In this paper, we empirically show that the choice between the two architectures makes little difference to performance. The merge architecture does have practical advantages, however, as conditioning by merging allows the RNN's hidden state vector to shrink in size by up to four times. Our results suggest that the visual and linguistic modalities for caption generation need not be jointly encoded by the RNN, as that yields large, memory-intensive models with few tangible advantages in performance; rather, the multimodal integration should be delayed to a subsequent stage.
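
The two conditioning points can be contrasted in a minimal forward pass: "inject" feeds the image features into the RNN's input at every step, so the hidden state must carry both modalities, while "merge" lets the RNN encode only the words and joins the image afterwards. The simple tanh cell, dimensions, and random weights below are illustrative assumptions, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(0)
d_word, d_img, d_hid = 8, 16, 8

def rnn_step(h, x, W, U):
    """One step of a plain tanh RNN cell."""
    return np.tanh(W @ x + U @ h)

words = rng.standard_normal((5, d_word))  # a caption prefix, one row per word
img = rng.standard_normal(d_img)          # image feature vector

# Inject: the image enters the recurrent state at every step.
W_in = rng.standard_normal((d_hid, d_word + d_img)) * 0.1
U_in = rng.standard_normal((d_hid, d_hid)) * 0.1
h = np.zeros(d_hid)
for x in words:
    h = rnn_step(h, np.concatenate([x, img]), W_in, U_in)
inject_out = h                            # hidden state encodes both modalities

# Merge: the RNN sees only words; the image joins after the RNN,
# which is why the hidden state can be smaller.
W_m = rng.standard_normal((d_hid, d_word)) * 0.1
U_m = rng.standard_normal((d_hid, d_hid)) * 0.1
h = np.zeros(d_hid)
for x in words:
    h = rnn_step(h, x, W_m, U_m)
merge_out = np.concatenate([h, img])      # multimodal vector fed to the softmax
```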

