An Enhanced Training Algorithm for Multilayer Neural Networks Based on Reference Output of Hidden Layer

1999 ◽  
Vol 8 (3) ◽  
pp. 218-225 ◽  
Author(s):  
Y. Li ◽  
A. B. Rad ◽  
W. Peng
1995 ◽  
Vol 03 (04) ◽  
pp. 1177-1191 ◽  
Author(s):  
HÉLÈNE PAUGAM-MOISY

This article is a survey of recent advances in multilayer neural networks. The first section is a short summary of multilayer neural networks: their history, their architecture, and their learning rule, the well-known back-propagation. In the following section, several theorems are cited which present one-hidden-layer neural networks as universal approximators. The next section points out that two hidden layers are often required for exactly realizing d-dimensional dichotomies. Defining the frontier between one-hidden-layer and two-hidden-layer networks is still an open problem. Several bounds on the size of a multilayer network which learns from examples are presented, and we emphasize the fact that, even if everything can be done with only one hidden layer, things can often be done better with two or more hidden layers. Finally, this assertion is supported by the behaviour of multilayer neural networks in two applications: prediction of pollution and odor recognition modelling.
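The back-propagation learning rule that the survey summarizes can be sketched for the one-hidden-layer case. The following is a minimal illustration in pure Python, not the survey's own code; the class name, layer sizes, and squared-error loss are assumptions chosen for clarity.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class OneHiddenLayerNet:
    """Minimal n_in -> n_hidden -> 1 perceptron trained with plain back-propagation."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        self.w1 = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)]
        self.b1 = [0.0] * n_hidden
        self.w2 = [rng.uniform(-1, 1) for _ in range(n_hidden)]
        self.b2 = 0.0

    def forward(self, x):
        # hidden activations, then a single sigmoid output
        self.h = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
                  for row, b in zip(self.w1, self.b1)]
        self.y = sigmoid(sum(w * hi for w, hi in zip(self.w2, self.h)) + self.b2)
        return self.y

    def train_step(self, x, t, lr=0.5):
        """One gradient step on the squared error 0.5 * (y - t)**2."""
        y = self.forward(x)
        # output delta: dE/dy * dy/dz for a sigmoid output
        d_out = (y - t) * y * (1 - y)
        # hidden deltas: propagate d_out back through the (pre-update) output weights
        d_hid = [d_out * w * h * (1 - h) for w, h in zip(self.w2, self.h)]
        for j, h in enumerate(self.h):
            self.w2[j] -= lr * d_out * h
        self.b2 -= lr * d_out
        for j, dj in enumerate(d_hid):
            for i, xi in enumerate(x):
                self.w1[j][i] -= lr * dj * xi
            self.b1[j] -= lr * dj
        return 0.5 * (y - t) ** 2
```

Trained on the XOR problem, a net like this illustrates why a hidden layer is needed at all: XOR is not linearly separable, so no zero-hidden-layer perceptron can realize it.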


2011 ◽  
Vol 74 (16) ◽  
pp. 2491-2501 ◽  
Author(s):  
Zhihong Man ◽  
Kevin Lee ◽  
Dianhui Wang ◽  
Zhenwei Cao ◽  
Chunyan Miao

1994 ◽  
Vol 05 (03) ◽  
pp. 217-228 ◽  
Author(s):  
ANDREAS WENDEMUTH

General analytic expressions are given for the lower and upper storage capacity bounds of multilayer neural networks which have variable weights between input layer and first hidden layer, and a fixed output function implemented between first hidden and output layer. The special cases of committee and parity machines as well as the limiting cases of networks with minimum and maximum storage capacities are discussed. The results are compared with replica calculations and simulations. An explanation is given as to why the latter have a storage capacity just slightly above the lower limit and how this can be improved.
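The two special cases named in the abstract differ only in the fixed output function applied to the hidden units. A minimal sketch (with sign-valued units, an assumption consistent with the usual definitions of these machines, not code from the paper):

```python
def sign(x):
    return 1 if x >= 0 else -1

def hidden_units(x, weights):
    """First layer: the variable (trainable) weights feed sign-valued hidden units."""
    return [sign(sum(w * xi for w, xi in zip(row, x))) for row in weights]

def committee(x, weights):
    """Committee machine: the fixed output function is a majority vote."""
    return sign(sum(hidden_units(x, weights)))

def parity(x, weights):
    """Parity machine: the fixed output function is the product (parity) of the hidden units."""
    p = 1
    for h in hidden_units(x, weights):
        p *= h
    return p
```

Only the first-layer weights are learned; the output function stays fixed, which is exactly the setting in which the storage-capacity bounds above apply.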


Author(s):  
Paola Andrea Sánchez-Sánchez ◽  
José Rafael García-González ◽  
Leidy Haidy Perez Coronell

The objective of this chapter is to analyze the problems surrounding forecasting with neural networks and the factors that affect model construction and often lead to inconsistent results, with emphasis on the selection of the training algorithm, the number of neurons in the hidden layer, and the input variables. The methodology focuses on time-series forecasting, motivated by the growing need for tools that facilitate decision-making, especially for series whose noise and variability suggest nonlinear dynamics. Neural networks have emerged as an attractive approach to representing such behaviors because of their adaptability, generalization, and learning capabilities. Practical evidence shows that the Delta Delta and RProp training methods exhibit behaviors different from those expected.
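Of the training methods mentioned, RProp is defined by a per-weight step size that adapts to the sign of the gradient rather than its magnitude. A simplified sketch of that core update follows (assumed textbook parameter values; details such as weight backtracking vary between RProp variants and this is not the chapter's own implementation):

```python
def rprop_step(grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_max=50.0, step_min=1e-6):
    """One RProp update per weight.

    Only the SIGN of each gradient component is used: if the sign agrees with
    the previous step the step size grows, if it flips (overshoot) it shrinks.
    Returns the weight deltas and the adapted step sizes.
    """
    new_steps, deltas = [], []
    for g, pg, s in zip(grad, prev_grad, step):
        if g * pg > 0:            # same sign: accelerate
            s = min(s * eta_plus, step_max)
        elif g * pg < 0:          # sign flipped: back off
            s = max(s * eta_minus, step_min)
        new_steps.append(s)
        # move against the gradient by the adapted step size
        direction = 1 if g > 0 else -1 if g < 0 else 0
        deltas.append(-s * direction)
    return deltas, new_steps
```

Because the gradient magnitude is discarded, RProp is insensitive to the error-surface scaling that makes plain gradient descent fragile, which is one reason its practical behavior can diverge from naive expectations.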


Author(s):  
A.М. Заяц ◽  
С.П. Хабаров

The procedure for selecting the structure and parameters of a neural network for classifying the data set known as Fisher's Iris, which includes data on 150 plant specimens of three different species, is considered. An approach to solving this problem without additional software or powerful neural-network packages, using only the tools of a standard OS browser, is proposed. This required the implementation of a number of JavaScript procedures and their loading into the developed HTML interface page. The study of a large number of different structures of multilayer neural networks, trained with the back-propagation algorithm, made it possible to choose, for the test data set, a neural network structure with only one hidden layer of three neurons. This greatly simplifies the implementation of the Fisher Iris classifier, allowing it to be packaged as an HTML page downloaded from the server.
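The architecture the study settles on is small: 4 inputs (the Iris measurements), one hidden layer of 3 neurons, 3 class outputs. A forward-pass sketch of such a 4-3-3 network is shown below in Python for compactness (the study itself used JavaScript); the sigmoid hidden layer and softmax output are assumptions, as the abstract does not specify the activation functions.

```python
import math

def iris_net_forward(x, W1, b1, W2, b2):
    """Forward pass of a 4-3-3 network: 4 inputs, one hidden layer of
    3 sigmoid neurons, 3 softmax outputs (one probability per species)."""
    h = [1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
         for row, b in zip(W1, b1)]
    z = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(W2, b2)]
    m = max(z)                              # shift for numerical stability
    e = [math.exp(zi - m) for zi in z]
    s = sum(e)
    return [ei / s for ei in e]
```

With only 3 × 4 + 3 first-layer weights and 3 × 3 + 3 output weights, the whole parameter set is small enough to embed directly in a served HTML page, which is what makes the browser-only deployment practical.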

