Equilibrium Points of Single-Layered Neural Networks with Feedback and Applications in the Analysis of Text Documents

Author(s):  
Alexander Shmelev ◽  
Vyacheslav Avdeychik
2021 ◽  
Vol 3 (4) ◽  
pp. 922-945
Author(s):  
Shaw-Hwa Lo ◽  
Yiqiao Yin

Text classification is a fundamental task in Natural Language Processing. A variety of sequential models are capable of making good predictions, yet there is a lack of connection between language semantics and prediction results. This paper proposes a novel influence score (I-score), a greedy search algorithm called the Backward Dropping Algorithm (BDA), and a novel feature engineering technique called the “dagger technique”. First, the paper proposes using the I-score to detect and search for the important language semantics in text documents that are useful for making good predictions in text classification tasks. Next, the Backward Dropping Algorithm, a greedy search procedure, is proposed to handle long-term dependencies in the dataset. Moreover, the paper proposes the “dagger technique”, a feature engineering method that fully preserves the relationship between the explanatory variable and the response variable. The proposed techniques can be generalized to any feed-forward Artificial Neural Network (ANN) or Convolutional Neural Network (CNN). In a real-world application on the Internet Movie Database (IMDB), the proposed methods improve prediction performance, achieving an 81% error reduction relative to popular peer models that do not implement the I-score and the “dagger technique”.
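
As a rough illustration of these two components, the sketch below implements one common form of the influence score, in which observations are partitioned into cells by the joint levels of a discretized variable subset, together with a greedy backward-dropping search. The normalization of the I-score and all function names here are assumptions for illustration; the paper's exact formulation is not reproduced.

```python
# Minimal sketch: I-score of a variable subset and the Backward Dropping
# Algorithm (BDA). Assumed form: I(S) = sum_j n_j^2 * (ybar_j - ybar)^2,
# where j indexes cells of the partition induced by the subset S.
import numpy as np

def i_score(X, y, subset):
    """I-score of the column subset `subset`; X holds discretized features."""
    cells = {}
    for row, label in zip(X[:, subset], y):
        cells.setdefault(tuple(row), []).append(label)
    ybar = y.mean()
    return sum(len(v) ** 2 * (np.mean(v) - ybar) ** 2 for v in cells.values())

def backward_dropping(X, y, subset):
    """Greedy BDA: repeatedly drop the variable whose removal yields the
    highest I-score; return the best-scoring subset seen along the way."""
    subset = list(subset)
    best_subset, best_score = subset[:], i_score(X, y, subset)
    while len(subset) > 1:
        # score every one-variable deletion and keep the most favorable one
        score, i = max((i_score(X, y, subset[:k] + subset[k + 1:]), k)
                       for k in range(len(subset)))
        subset = subset[:i] + subset[i + 1:]
        if score > best_score:
            best_subset, best_score = subset[:], score
    return best_subset, best_score

# toy usage on synthetic binary data
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 6))
y = (X[:, 0] ^ X[:, 1]).astype(float)      # only columns 0 and 1 matter
print(backward_dropping(X, y, range(6)))   # typically recovers [0, 1]
```

The toy response is an XOR of two columns, so neither relevant variable is marginally informative on its own; interactions of this kind are precisely what the I-score is designed to surface.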


2009 ◽  
Vol 21 (1) ◽  
pp. 101-120 ◽  
Author(s):  
Dequan Jin ◽  
Jigen Peng

In this letter, using methods proposed by E. Kaslik, St. Balint, and their colleagues, we develop a new method, the expansion approach, for estimating the attraction domain of asymptotically stable equilibrium points of Hopfield-type neural networks. We prove theoretically and demonstrate numerically that the proposed approach is feasible and efficient. The numerical results obtained in the application examples, including the network system considered by E. Kaslik, L. Brăescu, and St. Balint, indicate that the proposed approach achieves better attraction-domain estimates.
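
The expansion approach itself is analytical, but its core idea can be conveyed with a simulation-based stand-in: grow a sphere around a stable equilibrium until some sampled trajectory fails to return. The Hopfield-type dynamics dx/dt = -x + W tanh(x) + b and every parameter below are illustrative assumptions, not the method of the letter.

```python
import numpy as np

def settle(x, W, b, n_steps=20000, dt=0.02):
    """Euler-integrate the dynamics to land on a nearby equilibrium."""
    for _ in range(n_steps):
        x = x + dt * (-x + W @ np.tanh(x) + b)
    return x

def estimated_attraction_radius(W, b, x_eq, r0=0.05, growth=1.2, r_max=5.0,
                                n_samples=100, n_steps=2000, tol=1e-3):
    """Expand a sphere around x_eq until a sampled start point fails to
    return; report the last radius at which all samples converged back."""
    rng = np.random.default_rng(1)
    r = r0
    while r < r_max:
        D = rng.normal(size=(n_samples, x_eq.size))
        X = x_eq + r * D / np.linalg.norm(D, axis=1, keepdims=True)
        for _ in range(n_steps):                  # batched Euler integration
            X = X + 0.02 * (-X + np.tanh(X) @ W.T + b)
        if np.any(np.linalg.norm(X - x_eq, axis=1) > tol):
            return r / growth                     # escape seen at radius r
        r *= growth
    return r_max                                  # no escape found up to r_max

# two decoupled bistable units: stable equilibria near (±1.915, ±1.915)
W, b = 2.0 * np.eye(2), np.zeros(2)
x_eq = settle(np.ones(2), W, b)
print(estimated_attraction_radius(W, b, x_eq))    # conservative estimate of ~1.9
```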


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Yanke Du ◽  
Yanlu Li ◽  
Rui Xu

A general class of Cohen-Grossberg neural networks with time-varying delays, distributed delays, and discontinuous activation functions is investigated. By partitioning the state space and employing an analysis approach and the Cauchy convergence principle, sufficient conditions are established for the existence and local exponential stability of multiple equilibrium points and periodic orbits, which ensure that n-dimensional Cohen-Grossberg neural networks with k-level discontinuous activation functions can have k^n equilibrium points or k^n periodic orbits. Finally, several examples are given to illustrate the feasibility of the obtained results.
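
The k^n count is easy to visualize numerically. The toy sketch below drops the delays and the general Cohen-Grossberg form (an illustrative simplification, not the paper's setting) and uses decoupled units dx/dt = -x + f(x) with a 3-level staircase activation, yielding 3^2 = 9 stable equilibria for n = 2.

```python
import numpy as np

def f(x, levels=(-1.0, 0.0, 1.0), thresholds=(-0.5, 0.5)):
    """k-level staircase (discontinuous) activation, here k = 3."""
    return np.asarray(levels)[np.digitize(x, thresholds)]

n, dt = 2, 0.05
equilibria = set()
grid = np.linspace(-2, 2, 9)
for x1 in grid:
    for x2 in grid:
        x = np.array([x1, x2])
        for _ in range(2000):            # integrate to steady state
            x = x + dt * (-x + f(x))
        equilibria.add(tuple(np.round(x, 3)))
print(len(equilibria))                   # 9 = k^n with k = 3, n = 2
```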


2012 ◽  
Vol 89 ◽  
pp. 106-113 ◽  
Author(s):  
Qi Han ◽  
Xiaofeng Liao ◽  
Tengfei Weng ◽  
Chuandong Li ◽  
Hongyu Huang

2020 ◽  
Vol 17 (9) ◽  
pp. 3867-3872
Author(s):  
Aniv Chakravarty ◽  
Jagadish S. Kallimani

Text summarization is an active field of research with the goal of providing short and meaningful gists of large amounts of text. Extractive text summarization methods, in which text is extracted from the documents to build summaries, have been studied extensively. Multi-document inputs vary widely in format, domain, and topic. With recent advances in technology and the use of neural networks for text generation, interest in research on abstractive text summarization has increased significantly. Graph-based methods that handle semantic information have shown significant results. Given a set of English text documents, we use an abstractive method and predicate-argument structures to retrieve the necessary text information and pass it through a neural network for text generation. Recurrent neural networks are a subtype of recursive neural networks that predict the next element of a sequence from the current state while taking into account information from previous states. The use of neural networks also allows summaries to be generated for long sentences. This paper implements a semantics-based filtering approach using a similarity matrix while keeping all stop-words. The similarity is calculated from semantic concepts using Jiang–Conrath similarity, and a recurrent neural network with an attention mechanism is used to generate the summary. ROUGE scores are used to measure accuracy, precision, and recall.
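
A hedged sketch of how such a similarity matrix might be assembled with NLTK's WordNet interface and Jiang–Conrath similarity is given below. The word-to-sentence aggregation is a simple assumed choice, and neither the paper's exact weighting nor the downstream attention RNN is reproduced; the NLTK data packages 'wordnet' and 'wordnet_ic' must be installed first.

```python
import numpy as np
from nltk.corpus import wordnet as wn, wordnet_ic

ic = wordnet_ic.ic('ic-brown.dat')   # information content from the Brown corpus

def word_sim(w1, w2):
    """Best Jiang-Conrath similarity over the noun synsets of two words,
    capped at 1.0 (NLTK returns a huge constant for identical concepts)."""
    best = 0.0
    for s1 in wn.synsets(w1, pos=wn.NOUN):
        for s2 in wn.synsets(w2, pos=wn.NOUN):
            try:
                best = max(best, min(s1.jcn_similarity(s2, ic), 1.0))
            except Exception:        # no common subsumer / missing IC data
                pass
    return best

def sentence_sim(s1, s2):
    """Average, over words of s1, of the best match in s2
    (stop-words are kept, as in the paper)."""
    w1, w2 = s1.lower().split(), s2.lower().split()
    if not w1 or not w2:
        return 0.0
    return float(np.mean([max(word_sim(a, b) for b in w2) for a in w1]))

def similarity_matrix(sentences):
    return np.array([[sentence_sim(a, b) for b in sentences] for a in sentences])

print(similarity_matrix(["the cat sat on the mat", "a dog lay on the rug"]))
```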


2000 ◽  
Vol 12 (2) ◽  
pp. 451-472 ◽  
Author(s):  
Fation Sevrani ◽  
Kennichi Abe

In this article we present techniques for designing associative memories to be implemented by a class of synchronous discrete-time neural networks based on a generalization of the brain-state-in-a-box neural model. First, we address the local qualitative properties and global qualitative aspects of the class of neural networks considered. Our approach to the stability analysis of the equilibrium points of the network gives insight into the extent of the domain of attraction for the patterns to be stored as asymptotically stable equilibrium points and is useful in the analysis of the retrieval performance of the network and also for design purposes. By making use of the analysis results as constraints, the design for associative memory is performed by solving a constraint optimization problem whereby each of the stored patterns is guaranteed a substantial domain of attraction. The performance of the designed network is illustrated by means of three specific examples.
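
For context, the sketch below shows the classical brain-state-in-a-box iteration x(k+1) = sat(x(k) + αWx(k)) that this class of networks generalizes, with patterns stored as asymptotically stable vertices of the hypercube. The pseudoinverse weight rule is an illustrative assumption, not the constrained-optimization design of the article.

```python
import numpy as np

def bsb_step(x, W, alpha=0.5):
    # sat(.) clips each component to [-1, 1], keeping the state in the box
    return np.clip(x + alpha * (W @ x), -1.0, 1.0)

def recall(x0, W, n_steps=100):
    x = x0.copy()
    for _ in range(n_steps):
        x = bsb_step(x, W)
    return x

# store two bipolar patterns as stable vertices via a pseudoinverse rule
P = np.array([[1, -1, 1, -1],
              [1, 1, -1, -1]], dtype=float).T   # patterns as columns
W = P @ np.linalg.pinv(P)                       # projection onto span(P)

probe = np.array([0.9, -0.8, 0.7, -0.6])        # noisy version of pattern 1
print(recall(probe, W))                         # -> [ 1. -1.  1. -1.]
```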


1994 ◽  
Vol 05 (03) ◽  
pp. 165-180 ◽  
Author(s):  
Subramania I. Sudharsanan ◽  
Malur K. Sundareshan

Complexity of implementation has been a major difficulty in the development of gradient descent learning algorithms for dynamical neural networks with feedback and recurrent connections. Some insights from the stability properties of the equilibrium points of the network, which suggest an appropriate tailoring of the sigmoidal nonlinear functions, can however be utilized in obtaining simplified learning rules, as demonstrated in this paper. An analytical proof of convergence of the learning scheme under specific conditions is given and some upper bounds on the adaptation parameters for an efficient implementation of the training procedure are developed. The performance features of the learning algorithm are illustrated by applying it to two problems of importance, viz., design of associative memories and nonlinear input-output mapping. For the first application, a systematic procedure is given for training a network to store multiple memory vectors as its stable equilibrium points, whereas for the second application, specific training rules are developed for a three-layer network architecture comprising a dynamical hidden layer for the identification of nonlinear input-output maps. A comparison with the performance of a standard backpropagation network provides an illustration of the capabilities of the present network architecture and the learning algorithm.
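
A minimal sketch of the general idea of storing memory vectors as stable equilibria of a dynamical network is given below, assuming dynamics dx/dt = -x + Wσ(x) + b. The gradient rule on the equilibrium residual and the steep sigmoid are assumptions in the spirit of the paper's simplified rules and sigmoid tailoring, not the paper's algorithm.

```python
import numpy as np

def sigma(x, gain=4.0):
    # steep sigmoid; high gain pushes equilibria toward saturation
    return np.tanh(gain * x)

def train_equilibria(patterns, lr=0.05, n_epochs=2000):
    """A pattern xi is an equilibrium iff W*sigma(xi) + b = xi, so descend
    the squared equilibrium residual E = ||W*sigma(xi) + b - xi||^2 / 2."""
    n = patterns.shape[1]
    W, b = np.zeros((n, n)), np.zeros(n)
    for _ in range(n_epochs):
        for xi in patterns:
            r = W @ sigma(xi) + b - xi          # equilibrium residual
            W -= lr * np.outer(r, sigma(xi))    # dE/dW
            b -= lr * r                         # dE/db
    return W, b

patterns = np.array([[0.9, -0.9, 0.9], [-0.9, 0.9, 0.9]])
W, b = train_equilibria(patterns)

# verify: integrate from a noisy state near the first pattern
x = patterns[0] + 0.1 * np.random.default_rng(2).normal(size=3)
for _ in range(5000):
    x = x + 0.01 * (-x + W @ sigma(x) + b)
print(np.round(x, 2))    # close to [ 0.9 -0.9  0.9]
```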

