A Generalized Global Convergence Theory of Projection-Type Neural Networks for Optimization

Author(s):  
Rui Zhang ◽  
Zongben Xu
Mathematics ◽  
2021 ◽  
Vol 9 (13) ◽  
pp. 1551

Author(s):  
Bothina El-Sobky ◽  
Yousria Abo-Elnaga ◽  
Abd Allah A. Mousa ◽  
Mohamed A. El-Shorbagy

In this paper, a penalty method is used together with a barrier method to transform a constrained nonlinear programming problem into an unconstrained nonlinear programming problem. In the proposed approach, Newton's method is applied to the barrier Karush–Kuhn–Tucker conditions. To ensure global convergence from any starting point, a trust-region globalization strategy is used. A global convergence theory of the penalty–barrier trust-region (PBTR) algorithm is established under four standard assumptions. The PBTR algorithm has attractive features: it is simple, converges rapidly, and is easy to implement. Numerical simulation was performed on some benchmark problems. The proposed algorithm was also applied to find the optimal design of a canal section with a triangular cross-section for minimum water loss. The results are promising when compared with well-known algorithms.
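The ingredients described above can be illustrated on a toy 1-D problem. This is a minimal hypothetical sketch, not the paper's PBTR algorithm: it minimizes (x-2)^2 subject to x <= 1 by solving a sequence of log-barrier subproblems with Newton steps whose length is capped by a trust-region radius; the problem, derivatives, and parameter choices are all illustrative assumptions.

```python
import math

def barrier_newton(x, mu, radius=1.0, iters=50):
    """Newton's method on the barrier function
    phi(x) = (x - 2)**2 - mu * log(1 - x), with the step
    length capped by a trust-region radius (illustrative
    stand-in for a full trust-region globalization)."""
    for _ in range(iters):
        d1 = 2.0 * (x - 2.0) + mu / (1.0 - x)        # phi'(x)
        d2 = 2.0 + mu / (1.0 - x) ** 2               # phi''(x)
        step = -d1 / d2                              # Newton step
        step = max(-radius, min(radius, step))       # trust-region cap
        x_new = x + step
        if x_new >= 1.0:                             # stay strictly feasible
            x_new = x + 0.5 * (1.0 - x)
        if abs(x_new - x) < 1e-12:
            return x_new
        x = x_new
    return x

def solve(x0=0.0):
    """Outer loop: shrink the barrier parameter toward zero,
    warm-starting each subproblem at the previous solution."""
    x, mu = x0, 1.0
    for _ in range(30):
        x = barrier_newton(x, mu)
        mu *= 0.5
    return x

# solve() approaches the constrained minimizer x = 1 as mu -> 0.
```

The warm start matters: each barrier minimizer lies close to the next one, so Newton's method stays in its fast local-convergence regime while the trust-region cap guards the early iterations.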


2014 ◽  
Vol 538 ◽  
pp. 167-170
Author(s):  
Hui Zhong Mao ◽  
Chen Qiao ◽  
Wen Feng Jing ◽  
Xi Chen ◽  
Jin Qin Mao

This paper presents the global convergence theory of the discrete-time uniform pseudo projection anti-monotone network with a quasi-symmetric matrix, which removes the connection-matrix constraints. The theory widens the range of applications of the discrete-time uniform pseudo projection anti-monotone network and is valid for many kinds of discrete recurrent neural network models.
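Discrete-time projection networks of the general kind covered by such convergence theories iterate a projected update until a fixed point is reached. The sketch below is a hypothetical illustration, not the paper's specific anti-monotone model: it runs the generic iteration x_{k+1} = P_Omega(x_k - alpha*(M x_k + q)) on the box Omega = [0,1]^n, where M, q, and alpha are illustrative choices.

```python
# Generic discrete-time projection network on the box [0, 1]^n.
# Fixed points of the iteration solve the box-constrained
# variational inequality / QP associated with (M, q).

def project_box(x, lo=0.0, hi=1.0):
    """Componentwise projection onto the box [lo, hi]^n."""
    return [min(hi, max(lo, xi)) for xi in x]

def matvec(M, x):
    """Plain Python matrix-vector product."""
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

def projection_network(M, q, x0, alpha=0.1, iters=500):
    """Iterate x <- P_box(x - alpha * (M x + q))."""
    x = x0
    for _ in range(iters):
        y = matvec(M, x)
        x = project_box([xi - alpha * (yi + qi)
                         for xi, yi, qi in zip(x, y, q)])
    return x

# Illustrative data: M symmetric positive definite, so the
# iteration is a contraction for small enough alpha.
M = [[2.0, 0.5], [0.5, 2.0]]
q = [-1.0, 0.5]
x_star = projection_network(M, q, [0.5, 0.5])
# x_star converges to [0.5, 0.0], the constrained minimizer.
```

For this M the map x - alpha*(M x + q) is a contraction when alpha is small relative to the eigenvalues of M, which is the simplest setting in which global convergence of such a network can be seen directly; the cited theory handles much weaker conditions on the connection matrix.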


2009 ◽  
Vol 10 (4) ◽  
pp. 2195-2206 ◽  
Author(s):  
Zhaohui Yuan ◽  
Lifen Yuan ◽  
Lihong Huang ◽  
Dewen Hu
