An Advanced Conjugate Gradient Training Algorithm Based on a Modified Secant Equation

2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Ioannis E. Livieris ◽  
Panagiotis Pintelas

Conjugate gradient methods constitute excellent neural network training methods, characterized by their simplicity, numerical efficiency, and very low memory requirements. In this paper, we propose a conjugate gradient neural network training algorithm which guarantees sufficient descent with any line search, thereby avoiding the usually inefficient restarts. Moreover, it achieves high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant condition proposed by Li et al. (2007). Under mild conditions, we establish that the proposed method is globally convergent for general functions under the strong Wolfe line search. Experimental results provide evidence that the proposed method is preferable to, and in general superior to, the classical conjugate gradient methods and has the potential to significantly enhance the computational efficiency and robustness of the training process.
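A minimal sketch of the kind of update this abstract describes, assuming a Hestenes-Stiefel-type direction combined with a Li et al. (2007)-style modified gradient difference; the helper names (modified_secant_y, cg_direction) are illustrative and the paper's exact formulas may differ:

```python
import numpy as np

def modified_secant_y(f_k, f_next, g_k, g_next, s_k):
    # Modified gradient difference y* = y + (theta / s^T s) * s, where theta
    # uses function values to capture higher-order curvature information
    # (Li et al., 2007 style; illustrative only).
    y_k = g_next - g_k
    theta = 6.0 * (f_k - f_next) + 3.0 * (g_k + g_next) @ s_k
    return y_k + (theta / (s_k @ s_k)) * s_k

def cg_direction(g_next, d_k, y_star, eps=1e-12):
    # Hestenes-Stiefel-type conjugate gradient direction built from the
    # modified secant vector y_star; the eps guard is a safeguard only.
    denom = d_k @ y_star
    beta = (g_next @ y_star) / denom if abs(denom) > eps else 0.0
    return -g_next + beta * d_k
```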

2012 ◽  
Vol 21 (01) ◽  
pp. 1250009 ◽  
Author(s):  
IOANNIS E. LIVIERIS ◽  
PANAGIOTIS PINTELAS

Conjugate gradient methods constitute excellent neural network training methods, characterized by their simplicity and very low memory requirements. In this paper, we propose a new spectral conjugate gradient method which guarantees sufficient descent with any line search, thereby avoiding the usually inefficient restarts. Moreover, we establish the global convergence of the proposed method under some assumptions. Experimental results provide evidence that the proposed method is preferable to, and in general superior to, the classical conjugate gradient methods in terms of efficiency and robustness.
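For illustration, a sketch of a spectral conjugate gradient direction, assuming a Barzilai-Borwein-type spectral parameter and an HS-type beta; the paper's actual parameter choices and its mechanism for guaranteeing sufficient descent without restarts are not reproduced here:

```python
import numpy as np

def spectral_cg_direction(g_next, g_k, d_k, s_k, c=1e-4, eps=1e-12):
    # d = -theta * g + beta * d_prev, with a Barzilai-Borwein-type spectral
    # parameter theta and an HS-type beta (both illustrative choices).
    y_k = g_next - g_k
    theta = (s_k @ s_k) / max(s_k @ y_k, eps)
    denom = d_k @ y_k
    beta = (g_next @ y_k) / denom if abs(denom) > eps else 0.0
    d = -theta * g_next + beta * d_k
    # Safeguard only: fall back to steepest descent if the sufficient-descent
    # test g^T d <= -c * ||g||^2 fails (the paper's method avoids restarts).
    if g_next @ d > -c * (g_next @ g_next):
        d = -g_next
    return d
```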


Energies ◽  
2020 ◽  
Vol 13 (19) ◽  
pp. 5164
Author(s):  
Chin-Hsiang Cheng ◽  
Yu-Ting Lin

The present study develops a novel optimization method for designing a Stirling engine by combining a variable-step simplified conjugate gradient method (VSCGM) with a neural network training algorithm. Compared with existing gradient-based methods, such as the conjugate gradient method (CGM) and the simplified conjugate gradient method (SCGM), the VSCGM is a further modified version, presented in this study, that greatly accelerates convergence while still allowing the objective function to be defined flexibly. Through automatic adjustment of the variable step size, the optimal design is reached more efficiently and accurately, so the VSCGM appears to be a promising alternative tool for a variety of engineering applications. In this study, optimization of a low-temperature-differential gamma-type Stirling engine was undertaken as a test case. The optimizer was trained by the neural network algorithm on training data provided by three-dimensional computational fluid dynamics (CFD) computation, and the optimal values of the influential design parameters of the Stirling engine were obtained efficiently. Results show that the present approach increases the indicated work and thermal efficiency by 102.93% and 5.24%, respectively. The robustness of the VSCGM was tested with different sets of initial guesses.
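As a rough illustration of the variable-step idea, a sketch assuming a Fletcher-Reeves-type direction and a simple grow/shrink step adaptation; the grow and shrink factors and the adaptation rule are stand-ins, not the VSCGM rule from the paper:

```python
import numpy as np

def vscgm_step(x, grad, grad_prev, d_prev, step, f_new, f_old,
               grow=1.2, shrink=0.5, eps=1e-12):
    # Fletcher-Reeves-type conjugate direction (illustrative choice).
    beta = (grad @ grad) / max(grad_prev @ grad_prev, eps)
    d = -grad + beta * d_prev
    # Heuristic variable step size: enlarge the step while the objective
    # keeps improving, shrink it otherwise (stand-in for the VSCGM rule).
    step = step * grow if f_new < f_old else step * shrink
    return x + step * d, d, step
```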


2014 ◽  
Vol 10 (S306) ◽  
pp. 279-287 ◽  
Author(s):  
Michael Hobson ◽  
Philip Graff ◽  
Farhan Feroz ◽  
Anthony Lasenby

Machine-learning methods may be used to perform many tasks required in the analysis of astronomical data, including data description and interpretation, pattern recognition, prediction, classification, compression, inference and many more. An intuitive and well-established approach to machine learning is the use of artificial neural networks (NNs), which consist of a group of interconnected nodes, each of which processes the information it receives and then passes this product on to other nodes via weighted connections. In particular, I discuss the first public release of the generic neural network training algorithm, called SkyNet, and demonstrate its application to astronomical problems, focusing on its use in the BAMBI package for accelerated Bayesian inference in cosmology and on the identification of gamma-ray bursters. The SkyNet and BAMBI packages, which are fully parallelised using MPI, are available at http://www.mrao.cam.ac.uk/software/.
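A minimal feed-forward pass matching the abstract's description of nodes that combine weighted inputs and pass the result on via weighted connections; the sigmoid activation and the helper name feedforward are assumptions for illustration and do not reflect SkyNet's actual architecture:

```python
import numpy as np

def feedforward(x, weights, biases):
    # Each layer forms the weighted sum of its inputs (the "weighted
    # connections") and passes the activated result on to the next layer.
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))  # sigmoid activation
    return a
```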


2019 ◽  
Vol 32 (9) ◽  
pp. 4177-4185 ◽  
Author(s):  
Ioannis E. Livieris ◽  
Panagiotis Pintelas

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 1943-1951 ◽  
Author(s):  
Xiaohui Yang ◽  
Kaiwei Xu ◽  
Shaoping Xu ◽  
Peter Xiaoping Liu
