A Constructive Approach to Parsing with Neural Networks — The Hybrid Connectionist Parsing Method

Author(s):
Christel Kemke

Author(s):
Mohamed Zine El Abidine Skhiri ◽
Mohamed Chtourou

This paper investigates the applicability of the constructive approach proposed in Ref. 1 to wavelet neural networks (WNN). Two incremental training algorithms are presented. The first, the one pattern at a time (OPAT) approach, is the WNN version of the method applied in Ref. 1; the second, the one epoch at a time (OEAT) approach, is a modified version of that method. In the OPAT approach, the input patterns are trained incrementally, one by one, until all patterns have been presented. If the algorithm gets stuck in a local minimum and cannot escape after a fixed number of successive attempts, a new wavelet, also called a wavelon, is recruited. In the OEAT approach, all the input patterns are presented one epoch at a time; during one epoch, each pattern is trained exactly once. If the resulting overall error has decreased, all the patterns are retrained for one more epoch; otherwise, a new wavelon is recruited. To guarantee the convergence of the trained networks, an adaptive learning rate is introduced using the discrete Lyapunov stability theorem.
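The OEAT grow-when-stuck loop described in the abstract can be sketched as follows. This is a minimal, hypothetical 1-D illustration (Mexican-hat wavelons, plain gradient descent on the output weights only, unit count capped for the toy); it does not reproduce the paper's exact update rules, the OPAT variant, or the Lyapunov-based adaptive learning rate.

```python
import math
import random

def mexican_hat(t):
    """Mexican-hat mother wavelet: psi(t) = (1 - t^2) * exp(-t^2 / 2)."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

class TinyWNN:
    """A 1-D wavelet network: y(x) = sum_i w_i * psi((x - b_i) / a_i)."""

    def __init__(self):
        self.wavelons = []  # each wavelon: [weight w, translation b, dilation a]

    def recruit_wavelon(self):
        # Recruit a new hidden unit with small random parameters
        # (hypothetical initialization; capped at 8 units in this toy sketch).
        if len(self.wavelons) < 8:
            self.wavelons.append([random.uniform(-0.5, 0.5),
                                  random.uniform(-1.0, 1.0), 1.0])

    def predict(self, x):
        return sum(w * mexican_hat((x - b) / a) for w, b, a in self.wavelons)

    def train_one_epoch(self, data, lr=0.05):
        """OEAT inner loop: present every pattern exactly once, then return
        the overall squared error over the whole training set."""
        for x, y in data:
            err = self.predict(x) - y
            for unit in self.wavelons:
                w, b, a = unit
                # Gradient step on the output weight only (a simplification).
                unit[0] -= lr * err * mexican_hat((x - b) / a)
        return sum((self.predict(x) - y) ** 2 for x, y in data)

def oeat_train(data, epochs=200):
    """OEAT outer loop: retrain for another epoch while the overall error
    keeps decreasing; otherwise recruit one more wavelon."""
    random.seed(0)
    net = TinyWNN()
    net.recruit_wavelon()
    prev = float("inf")
    err = prev
    for _ in range(epochs):
        err = net.train_one_epoch(data)
        if err >= prev:       # error did not decrease over a full epoch
            net.recruit_wavelon()
        prev = err
    return net, err
```

The key design point mirrored from the abstract is the granularity of the stuck test: OEAT checks the overall error once per epoch and grows the network only when a whole epoch brings no improvement, rather than reacting to a single hard pattern as OPAT does.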


2011 ◽  
Vol 403-408 ◽  
pp. 858-865
Author(s):  
Sudhir Kumar Sharma ◽  
Pravin Chandra

This paper presents cascading neural networks using an adaptive sigmoidal function (CNNASF). The proposed algorithm emphasizes both architectural adaptation and functional adaptation during training: it is a constructive approach that builds the cascading architecture dynamically. The activation functions used at the hidden-layer nodes belong to a well-defined sigmoidal class and are adapted during training, so the algorithm determines not only the optimum number of hidden-layer nodes but also the optimum sigmoidal function for each of them. A simpler variant derived from CNNASF fixes the sigmoid function used at the hidden-layer nodes. The two variants are compared on five regression functions. Simulation results reveal that the adaptive sigmoidal function offers several advantages over the traditional fixed sigmoid function: increased flexibility, smoother learning, better convergence and better generalization performance.
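The two ideas named in the abstract, a sigmoid whose shape is itself trained (functional adaptation) and a network that grows one hidden node at a time when training stalls (architectural adaptation), can be sketched together as follows. Everything here is an illustrative assumption rather than the paper's formulation: the slope parameterization, the stall test, and the unit cap are invented for the toy, and a true cascade would also feed each new node from all previous hidden nodes, which this flat sketch omits.

```python
import math
import random

def adaptive_sigmoid(x, s):
    """Sigmoid with a trainable slope s: f(x) = 1 / (1 + exp(-s * x))."""
    return 1.0 / (1.0 + math.exp(-s * x))

class GrowingNet:
    """A 1-D regression net that adds hidden nodes during training."""

    def __init__(self):
        self.units = []  # each unit: [input weight w, slope s, output weight v]

    def add_unit(self):
        # New hidden node (hypothetical init; capped at 8 units in this toy).
        if len(self.units) < 8:
            self.units.append([random.uniform(-1.0, 1.0), 1.0,
                               random.uniform(-0.1, 0.1)])

    def predict(self, x):
        return sum(v * adaptive_sigmoid(w * x, s) for w, s, v in self.units)

    def train_epoch(self, data, lr=0.1):
        for x, y in data:
            err = self.predict(x) - y
            for u in self.units:
                w, s, v = u
                f = adaptive_sigmoid(w * x, s)
                df = f * (1.0 - f)                 # sigmoid derivative factor
                u[2] -= lr * err * f               # output weight v
                u[0] -= lr * err * v * df * s * x  # input weight w
                u[1] -= lr * err * v * df * w * x  # slope s: functional adaptation
        return sum((self.predict(x) - y) ** 2 for x, y in data)

def train_constructive(data, epochs=300):
    random.seed(1)
    net = GrowingNet()
    net.add_unit()
    prev = float("inf")
    err = prev
    for _ in range(epochs):
        err = net.train_epoch(data)
        if err > 0.999 * prev:  # training stalled: architectural adaptation
            net.add_unit()
        prev = err
    return net, err
```

Note that fixing `s = 1.0` and dropping the slope update recovers the simpler fixed-sigmoid variant the abstract compares against.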


2008 ◽  
Vol 71 (4-6) ◽  
pp. 626-630 ◽  
Author(s):  
Feilong Cao ◽  
Tingfan Xie ◽  
Zongben Xu
