Learning Distributed Selective Attention Strategies with the Sigma-if Neural Network

Author(s):  
Maciej Huk



Author(s):  
Maciej Huk

Backpropagation generalized delta rule for the selective attention Sigma-if artificial neural network

In this paper, the Sigma-if artificial neural network model is considered; it is a generalization of the MLP network with sigmoidal neurons and has been found to be a potentially universal tool for the automatic creation of distributed classification and selective attention systems. To cope with the high nonlinearity of the aggregation function of Sigma-if neurons, the training process of the Sigma-if network combines the error backpropagation algorithm with the self-consistency paradigm widely used in physics. For the same reason, however, the classical backpropagation delta rule for MLP networks cannot be applied directly. The general equation of the backpropagation generalized delta rule for the Sigma-if neural network is derived, and a selection of experimental results confirming its usefulness is presented.
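The selective-attention behaviour described in the abstract comes from the neuron's conditional aggregation: inputs are partitioned into groups that are read in order of importance, and aggregation stops once the accumulated signal reaches a threshold, so low-priority inputs may never be examined at all. The following is a minimal Python sketch of that mechanism only, under stated assumptions; the name sigma_if_neuron, the grouping vector groups, and the threshold phi_star are illustrative labels rather than the paper's notation, and the stopping rule is one plausible reading of the description above, not a reference implementation.

```python
import math

def sigma_if_neuron(x, w, groups, phi_star):
    """One Sigma-if neuron with stepwise (conditional) input aggregation.

    Inputs are read group by group, lower group index first. Once the
    accumulated weighted sum reaches phi_star, the remaining groups are
    skipped entirely -- this skipping is the selective-attention effect.
    """
    u = 0.0
    for k in sorted(set(groups)):              # groups in priority order
        u += sum(w[i] * x[i] for i in range(len(x)) if groups[i] == k)
        if u >= phi_star:                      # enough evidence gathered?
            break                              # later inputs are never read
    return 1.0 / (1.0 + math.exp(-u))          # sigmoidal output

# Toy call: the first group already crosses the threshold,
# so the second group (inputs 2 and 3) is ignored.
y = sigma_if_neuron(x=[1.0, 0.5, -0.2, 0.8],
                    w=[2.0, 1.5, 0.3, -0.7],
                    groups=[0, 0, 1, 1],
                    phi_star=2.0)
print(round(y, 3))                             # ~0.94
```

The training difficulty the abstract refers to is visible here: the hard stopping condition makes the aggregation function highly nonlinear (and non-smooth at the threshold), which is why the paper combines the delta rule with a self-consistency loop instead of using the MLP rule unchanged.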





1998
Vol 43 (10)
pp. 713-722
Author(s):  
David Servan-Schreiber ◽  
Randy M. Bruno ◽  
Cameron S. Carter ◽  
Jonathan D. Cohen


Symmetry
2018
Vol 10 (9)
pp. 357
Author(s):  
Zhen Tan ◽  
Bo Li ◽  
Peixin Huang ◽  
Bin Ge ◽  
Weidong Xiao

Relation classification (RC) is an important task in information extraction from unstructured text. Recently, neural methods based on various network architectures have been adopted for RC. Among them, convolutional neural network (CNN)-based models stand out for their simple structure, low model complexity, and good performance. Nevertheless, existing CNN-based RC models still have at least two limitations. First, when handling samples with long distances between entities, they fail to extract effective features and may even pick up distracting ones from intervening clauses, which reduces accuracy. Second, existing RC models tend to produce inconsistent results when fed the forward and backward instances of an identical sample. We therefore present a novel CNN-based sentence encoder with selective attention that leverages shortest dependency paths, and devise a classification framework that fuses information from symmetrical directional instances, forward and backward. Comprehensive experiments verify the superior performance of the proposed RC model over mainstream competitors without additional handcrafted features.
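The abstract combines three mechanisms that are easy to mis-read in prose: a CNN over the shortest dependency path (SDP) between the two entities, attention-weighted pooling in place of plain max-pooling, and fusion of a forward and a backward instance of the same sample. Below is a small NumPy sketch of that pipeline under stated assumptions; all names (conv1d, attentive_encode, the query vector q) and the concatenation-based fusion are illustrative stand-ins, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(X, W, b):
    """Valid 1-D convolution over a (seq_len, emb) token matrix.
    W: (window, emb, filters), b: (filters,). Returns ReLU feature maps."""
    n, k = X.shape[0], W.shape[0]
    out = np.stack([np.tensordot(X[i:i + k], W, axes=([0, 1], [0, 1])) + b
                    for i in range(n - k + 1)])
    return np.maximum(out, 0.0)

def attentive_encode(X, W, b, q):
    """Convolve, then pool with softmax attention instead of max-pooling."""
    H = conv1d(X, W, b)               # (positions, filters)
    scores = H @ q                    # relevance of each position
    a = np.exp(scores - scores.max())
    a /= a.sum()                      # attention weights over positions
    return a @ H                      # weighted sum -> sentence vector

# Toy shapes: 7 SDP tokens, 16-dim embeddings, window 3, 32 filters.
X = rng.normal(size=(7, 16))          # embedded shortest-dependency-path tokens
W = rng.normal(size=(3, 16, 32)) * 0.1
b = np.zeros(32)
q = rng.normal(size=32)               # attention query vector

fwd = attentive_encode(X, W, b, q)    # forward instance
bwd = attentive_encode(X[::-1], W, b, q)  # backward (reversed) instance
fused = np.concatenate([fwd, bwd])    # simple symmetric fusion
print(fused.shape)                    # (64,)
```

Reversing the token order and reusing the same encoder is only the simplest way to make the forward and backward views symmetric; the paper's actual fusion step may well differ.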


