Modelling Stable Alluvial River Profiles Using Back Propagation-Based Multilayer Neural Networks

Author(s):  
Hossein Bonakdari ◽  
Azadeh Gholami ◽  
Bahram Gharabaghi
1995 ◽  
Vol 03 (04) ◽  
pp. 1177-1191 ◽  
Author(s):  
HÉLÈNE PAUGAM-MOISY

This article is a survey of recent advances in multilayer neural networks. The first section is a short summary of multilayer neural networks: their history, their architecture, and their learning rule, the well-known back-propagation algorithm. In the following section, several theorems are cited which establish one-hidden-layer neural networks as universal approximators. The next section points out that two hidden layers are often required for exactly realizing d-dimensional dichotomies. Defining the frontier between one-hidden-layer and two-hidden-layer networks is still an open problem. Several bounds on the size of a multilayer network that learns from examples are presented, and we emphasize that, even if everything can be done with only one hidden layer, things can often be done better with two or more hidden layers. Finally, this assertion is supported by the behaviour of multilayer neural networks in two applications: pollution prediction and odor recognition modelling.
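The one-hidden-layer universal-approximation setting the survey discusses can be made concrete with a toy NumPy sketch. Everything here (the target function, layer width, learning rate, iteration count) is illustrative and not taken from the survey:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function and training grid (both illustrative).
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(np.pi * X)

# One hidden layer of tanh units: the universal-approximation setting.
H, lr = 10, 0.01
W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 1, (H, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, out0 = forward(X)
mse0 = float(np.mean((out0 - y) ** 2))

for _ in range(20000):
    h, out = forward(X)
    g_out = 2 * (out - y) / len(X)          # d(MSE)/d(output)
    g_h = g_out @ W2.T * (1 - h ** 2)       # back-propagate through tanh
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
    W1 -= lr * X.T @ g_h;  b1 -= lr * g_h.sum(0)

_, out = forward(X)
mse = float(np.mean((out - y) ** 2))
print(mse0, "->", mse)
```

A second hidden layer would add another weight matrix and one more back-propagation step, but, as the survey notes, one hidden layer already suffices in principle for this kind of smooth target.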


10.28945/2931 ◽  
2005 ◽  
Author(s):  
Mohammed A. Otair ◽  
Walid A. Salameh

There are many successful applications of back-propagation (BP) for training multilayer neural networks, but the algorithm has well-known shortcomings: learning often takes a long time to converge, and it may fall into local minima. One possible remedy for escaping local minima is to use a very small learning rate, which, however, slows down the learning process. The algorithm proposed in this study trains a multilayer neural network with a very small learning rate, which matters especially when the training set is large. It can be applied generically to any network size trained with back-propagation, through an optical time (seen time). The paper describes the proposed algorithm and how it can improve the performance of back-propagation. The feasibility of the proposed algorithm is shown through a number of experiments on different network architectures.
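The learning-rate trade-off the abstract describes can be seen on a one-parameter quadratic loss. This is a hypothetical toy, not the authors' algorithm; the loss and step counts are chosen only to expose the effect:

```python
# Gradient descent on L(w) = (w - 3)^2, whose minimum is at w = 3.
def gd(lr, steps=100, w0=0.0):
    w = w0
    for _ in range(steps):
        w -= lr * 2 * (w - 3)   # dL/dw = 2 (w - 3)
    return w

slow = gd(0.001)   # safe but still far from the minimum after 100 steps
good = gd(0.1)     # converges quickly
wild = gd(1.1)     # overshoots: the error grows at every step
print(slow, good, wild)
```

Too small a rate leaves the weight far from the optimum after a fixed budget of steps; too large a rate diverges. This is the tension that motivates schemes which keep the safety of a small rate without its slowness.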


1997 ◽  
Vol 08 (05n06) ◽  
pp. 509-515
Author(s):  
Yan Li ◽  
A. B. Rad

A new structure and training method for multilayer neural networks is presented. The proposed method is based on cascade training of subnetworks, optimizing the weights layer by layer. The training procedure is completed in two steps. First, a subnetwork with m inputs and n outputs, matching the format of the training samples, is trained on the training samples. Second, another subnetwork with n inputs and n outputs is trained, taking the outputs of the first subnetwork as its inputs and the outputs of the training samples as its desired outputs. Finally, the two trained subnetworks are connected, yielding a trained multilayer neural network. Numerical simulation results based on both the linear least-squares back-propagation (LSB) and the traditional back-propagation (BP) algorithm demonstrate the efficiency of the proposed method.
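The two-step cascade can be sketched as follows. As a stand-in for the authors' LSB procedure, each subnetwork here is a random tanh hidden layer whose output weights are fit by linear least squares; the target function, sizes, and seed are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_subnet(X, T, hidden=30):
    """One subnetwork: random tanh hidden layer, output weights by least squares."""
    Win = rng.normal(0, 1, (X.shape[1] + 1, hidden))
    H = np.tanh(np.c_[X, np.ones(len(X))] @ Win)
    Wout, *_ = np.linalg.lstsq(np.c_[H, np.ones(len(H))], T, rcond=None)
    return Win, Wout

def run_subnet(net, X):
    Win, Wout = net
    H = np.tanh(np.c_[X, np.ones(len(X))] @ Win)
    return np.c_[H, np.ones(len(H))] @ Wout

# Training samples: m = 2 inputs, n = 1 output (function chosen for illustration).
X = rng.uniform(-1, 1, (200, 2))
T = np.sin(X[:, :1]) * np.cos(X[:, 1:])

# Step 1: train the first subnetwork (m inputs, n outputs) on the samples.
net1 = fit_subnet(X, T)
Y1 = run_subnet(net1, X)

# Step 2: train a second subnetwork (n inputs, n outputs) whose inputs are the
# first subnetwork's outputs and whose targets are the desired outputs.
net2 = fit_subnet(Y1, T)

# Finally, connect the two trained subnetworks into one cascade.
pred = run_subnet(net2, run_subnet(net1, X))
mse1 = float(np.mean((Y1 - T) ** 2))
mse2 = float(np.mean((pred - T) ** 2))
print(mse1, "->", mse2)
```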


1999 ◽  
Vol 09 (03) ◽  
pp. 251-256 ◽  
Author(s):  
L.C. PEDROZA ◽  
C.E. PEDREIRA

This paper proposes a new methodology for approximating functions by incorporating a priori information. The relationship between the proposed scheme and multilayer neural networks is explored theoretically and numerically. The approach is particularly interesting for the very relevant class of limited-spectrum (band-limited) functions. The number of free parameters is smaller than for the back-propagation algorithm, opening the way to better generalization results.
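One classical illustration of how a band-limit prior cuts the number of free parameters is Shannon's sampling expansion: a function band-limited to |f| ≤ B is fully determined by its samples taken every 1/(2B). This sketch demonstrates the general idea only, not the authors' scheme; the bandwidth, test signal, and truncation window are assumptions:

```python
import numpy as np

B = 2.0                      # assumed bandwidth (the a priori information)
T = 1 / (2 * B)              # Nyquist sampling step
n = np.arange(-20, 21)       # finitely many samples (truncated series)

def f(x):                    # a band-limited test signal (components at 0.5 and 1.5 Hz)
    return np.sin(2 * np.pi * 1.5 * x) + 0.5 * np.cos(2 * np.pi * 0.5 * x)

samples = f(n * T)           # these samples are the only free parameters

def reconstruct(x):
    # np.sinc is the normalized sinc: sin(pi t) / (pi t)
    return np.sum(samples * np.sinc((x - n * T) / T))

x0 = 0.3
err = abs(reconstruct(x0) - f(x0))
print(err)
```

The residual error comes only from truncating the infinite series; a generic BP network approximating the same function would need to learn both the shape and the smoothness that the prior supplies for free.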


Author(s):  
A.М. Заяц ◽  
С.П. Хабаров

The procedure for selecting the structure and parameters of a neural network for classifying the data set known as Fisher's Iris data, which comprises measurements of 150 plant specimens of three different species, is considered. An approach to solving this problem without additional software or powerful neural-network packages, using only the tools of a standard OS browser, is proposed. This required implementing a number of procedures in JavaScript and loading them into a purpose-built HTML interface page. A study of a large number of different multilayer neural-network structures, trained with the back-propagation algorithm, made it possible to select, for this test data set, a network structure with only one hidden layer of three neurons. This greatly simplifies the implementation of the Fisher Iris classifier, allowing it to be packaged as an HTML page downloaded from the server.
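The shape of the classifier described above (4 inputs, one hidden layer of 3 neurons, 3 output classes, trained by back-propagation) can be sketched in a few lines. The authors work in browser JavaScript; this sketch uses NumPy, and a synthetic 3-class data set stands in for the real Iris measurements, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the 150-sample, 3-class Iris data (4 features each).
centers = rng.normal(0, 3, (3, 4))
X = np.vstack([c + rng.normal(0, 0.5, (50, 4)) for c in centers])
labels = np.repeat(np.arange(3), 50)
Y = np.eye(3)[labels]                          # one-hot targets

# 4-3-3 network: a single hidden layer of three neurons, as chosen in the paper.
W1 = rng.normal(0, 0.5, (4, 3)); b1 = np.zeros(3)
W2 = rng.normal(0, 0.5, (3, 3)); b2 = np.zeros(3)
lr = 0.5

for _ in range(2000):
    Hh = np.tanh(X @ W1 + b1)
    Z = Hh @ W2 + b2
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)          # softmax class probabilities
    G = (P - Y) / len(X)                       # cross-entropy error signal
    GH = G @ W2.T * (1 - Hh ** 2)              # back-propagated to hidden layer
    W2 -= lr * Hh.T @ G; b2 -= lr * G.sum(0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(0)

Hh = np.tanh(X @ W1 + b1)
scores = Hh @ W2 + b2
acc = float((scores.argmax(axis=1) == labels).mean())
print(acc)
```

With only 3 hidden neurons the whole model is 27 weights plus 6 biases, which is why it fits comfortably in a self-contained HTML page.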


Author(s):  
Sherif S. Ishak ◽  
Haitham M. Al-Deek

Pattern recognition techniques such as artificial neural networks continue to offer potential solutions to many of the existing problems associated with freeway incident-detection algorithms. This study focuses on the application of Fuzzy ART neural networks to incident detection on freeways. Unlike back-propagation models, Fuzzy ART is capable of fast, stable learning of recognition categories. It is an incremental approach that has the potential for on-line implementation. Fuzzy ART is trained with traffic patterns that are represented by 30-s loop-detector data of occupancy, speed, or a combination of both. Traffic patterns observed at the incident time and location are mapped to a group of categories. Each incident category maps incidents with similar traffic pattern characteristics, which are affected by the type and severity of the incident and the prevailing traffic conditions. Detection rate and false alarm rate are used to measure the performance of the Fuzzy ART algorithm. To reduce the false alarm rate that results from occasional misclassification of traffic patterns, a persistence time period of 3 min was arbitrarily selected. The algorithm performance improves when the temporal size of traffic patterns increases from one to two 30-s periods for all traffic parameters. An interesting finding is that the speed patterns produced better results than did the occupancy patterns. However, when combined, occupancy–speed patterns produced the best results. When compared with California algorithms 7 and 8, the Fuzzy ART model produced better performance.
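The Fuzzy ART mechanics the abstract relies on (complement coding, the choice function, the vigilance test, and fast learning) can be sketched compactly. The vigilance value and the two-feature "traffic patterns" below are illustrative, not the study's calibrated settings:

```python
import numpy as np

class FuzzyART:
    """Minimal Fuzzy ART: complement coding, choice/vigilance, fast learning."""
    def __init__(self, rho=0.75, alpha=0.001, beta=1.0):
        self.rho, self.alpha, self.beta = rho, alpha, beta
        self.W = []                         # one weight vector per category

    def present(self, I):
        I = np.concatenate([I, 1 - I])      # complement coding
        # Rank existing categories by the choice function |I ^ w| / (alpha + |w|).
        order = sorted(range(len(self.W)),
                       key=lambda j: -np.minimum(I, self.W[j]).sum()
                                      / (self.alpha + self.W[j].sum()))
        for j in order:
            match = np.minimum(I, self.W[j]).sum() / I.sum()
            if match >= self.rho:           # vigilance test passed: resonance
                self.W[j] = self.beta * np.minimum(I, self.W[j]) \
                            + (1 - self.beta) * self.W[j]
                return j
        self.W.append(I.copy())             # no resonance: create a new category
        return len(self.W) - 1

art = FuzzyART(rho=0.8)
free_flow = np.array([0.1, 0.9])            # e.g. low occupancy, high speed
incident  = np.array([0.8, 0.2])            # e.g. high occupancy, low speed
a = art.present(free_flow)
b = art.present(incident)
c = art.present(free_flow)                  # resonates with the first category
print(a, b, c)
```

Because learning only shrinks weights toward the fuzzy AND of inputs, categories are stable, which is what allows the incremental, on-line use described above.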


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Florian Stelzer ◽  
André Röhm ◽  
Raul Vicente ◽  
Ingo Fischer ◽  
Serhiy Yanchuk

Deep neural networks are among the most widely applied machine learning tools, showing outstanding performance in a broad range of tasks. We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals. The network states emerge in time as a temporal unfolding of the neuron's dynamics. By adjusting the feedback modulation within the loops, we adapt the network's connection weights. These connection weights are determined via a back-propagation algorithm in which both the delay-induced and the local network connections must be taken into account. Our approach can fully represent standard deep neural networks (DNN), encompasses sparse DNNs, and extends the DNN concept toward dynamical-systems implementations. The new method, which we call the Folded-in-time DNN (Fit-DNN), exhibits promising performance on a set of benchmark tasks.
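The folding idea can be demonstrated on a toy scale: the states of a tiny feed-forward network are computed one per time step by a single nonlinearity, with modulated delayed feedback supplying the inter-layer weights. This is a hand-built illustration of the unfolding principle only, not the authors' full Fit-DNN construction:

```python
import numpy as np

f = np.tanh
rng = np.random.default_rng(0)

# Reference network: 2 inputs -> 2 hidden tanh units -> 1 tanh output unit.
x = np.array([0.4, -0.7])
W1 = rng.normal(0, 1, (2, 2))
w2 = rng.normal(0, 1, 2)

h = f(W1 @ x)
ref = f(w2 @ h)

# "Folded" version: ONE neuron state updated over 3 time steps. Each step
# applies the same nonlinearity f; the time-varying drive and the modulated
# delayed feedback (delays d = 1 and d = 2) play the role of layer weights.
a = np.zeros(3)
a[0] = f(W1[0] @ x)                      # t = 0: first hidden unit
a[1] = f(W1[1] @ x)                      # t = 1: second hidden unit
a[2] = f(w2[1] * a[1] + w2[0] * a[0])    # t = 2: feedback from delays 1 and 2

print(ref, a[2])
```

The final time-step state reproduces the reference network's output exactly, because the delayed feedback terms reach back to precisely the states that serve as the hidden layer.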


Author(s):  
Cosimo Magazzino ◽  
Marco Mele

This paper shows that the co-movement of public revenues in the European Monetary Union (EMU) is driven by an unobserved common factor. Our empirical analysis uses yearly data covering the period 1970–2014 for 12 selected EMU member countries. We have found that this common component has a significant impact on public revenues in the majority of the countries. We highlight this common pattern with a dynamic factor model (DFM). Since this factor is unobservable, it is difficult to agree on what it represents. We argue that the latent factor that emerges from the two different empirical approaches used might have a composite nature, being the result both of the more general convergence of the economic cycles of the countries in the area and of an increasingly well-tuned tax structure. The original aspect of our paper, however, is the use of a back-propagation neural network (BPNN)-DF model to test the time-series results. At the level of computer programming, the results obtained represent the first empirical demonstration of the latent factor's presence.

