Computing Higher Order Derivatives in Universal Learning Networks
1998, Vol. 2 (2), pp. 47-53
This paper discusses higher order derivative computation for universal learning networks, which form a superset of all kinds of neural networks. Two computing algorithms, backward and forward propagation, are proposed. A technique called "local description" allows the proposed algorithms to be expressed very simply. Numerical simulations demonstrate the usefulness of higher order derivatives in neural network training.
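The paper's own forward-propagation algorithm is not reproduced in the abstract; as a hypothetical illustration of the underlying idea, the following Python sketch propagates second-order derivative information forward through a one-node tanh network by carrying truncated Taylor coefficients (value, first, and second derivative) alongside each intermediate value. The class name and the example network are assumptions, not the paper's notation.

```python
import math

class Taylor2:
    """A value together with its first and second derivatives w.r.t. one
    scalar input, propagated forward through the computation.
    (Illustrative sketch only, not the paper's algorithm.)"""
    def __init__(self, f, f1=0.0, f2=0.0):
        self.f, self.f1, self.f2 = f, f1, f2

    def __add__(self, other):
        o = other if isinstance(other, Taylor2) else Taylor2(other)
        return Taylor2(self.f + o.f, self.f1 + o.f1, self.f2 + o.f2)

    def __mul__(self, other):
        o = other if isinstance(other, Taylor2) else Taylor2(other)
        # Leibniz rule up to second order: (uv)'' = u''v + 2u'v' + uv''
        return Taylor2(self.f * o.f,
                       self.f1 * o.f + self.f * o.f1,
                       self.f2 * o.f + 2 * self.f1 * o.f1 + self.f * o.f2)

    __radd__ = __add__
    __rmul__ = __mul__

    def tanh(self):
        t = math.tanh(self.f)
        s = 1.0 - t * t                     # sech^2, i.e. tanh'
        # chain rule: (tanh u)'  = s u'
        #             (tanh u)'' = s u'' - 2 t s (u')^2
        return Taylor2(t, s * self.f1,
                       s * self.f2 - 2.0 * t * s * self.f1 ** 2)

# Second derivative of y = tanh(w*x + b) at x = 0.5 (hypothetical network)
x = Taylor2(0.5, 1.0, 0.0)                  # seed: dx/dx = 1, d2x/dx2 = 0
y = (2.0 * x + 0.3).tanh()
print(y.f, y.f1, y.f2)                      # value, dy/dx, d2y/dx2
```

One forward pass yields all orders up to the truncation simultaneously; higher orders would extend the same product and chain rules to more coefficients.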