Enhanced Expressive Power and Fast Training of Neural Networks by Random Projections

2021 ◽  
Vol 2 (3) ◽  
pp. 532-550
Author(s):  
Jian-Feng Cai


Author(s):  
Dr. C. Arunabala ◽  
P. Jwalitha ◽  
Soniya Nuthalapati

Traditional text sentiment analysis methods are based mainly on machine learning. However, their dependence on hand-built emotion dictionaries and manually designed and extracted features limits their ability to generalize. In contrast, deep models have greater expressive power and can better learn the complex mapping from data to affective semantics. In this paper, a Convolutional Neural Network (CNN) model combined with an SVM is proposed for text sentiment analysis. The experimental results show that the proposed method effectively improves the accuracy of text sentiment classification compared with a traditional CNN, confirming the effectiveness of sentiment analysis based on CNNs and SVMs.
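
As a rough illustration of the pipeline this abstract describes, the sketch below trains nothing end to end; it simply builds a small convolutional feature extractor over word embeddings and hands its pooled features to an SVM in place of a softmax layer. The architectural choices (vocabulary size, filter widths, RBF kernel) and the dummy data are assumptions for illustration, not the authors' configuration.

# Minimal sketch of a CNN-feature + SVM-classifier pipeline (assumed details).
import torch
import torch.nn as nn
from sklearn.svm import SVC

class TextCNN(nn.Module):
    """1D convolutional feature extractor over word embeddings."""
    def __init__(self, vocab_size=10000, embed_dim=128, num_filters=100,
                 kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k) for k in kernel_sizes
        )

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, embed_dim, seq_len)
        x = self.embedding(token_ids).transpose(1, 2)
        # Max-pool each feature map over time, then concatenate.
        feats = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return torch.cat(feats, dim=1)  # (batch, num_filters * len(kernel_sizes))

# Dummy data stands in for a real tokenized, labeled corpus.
train_tokens = torch.randint(0, 10000, (32, 50))   # 32 sentences, 50 tokens each
train_labels = torch.randint(0, 2, (32,)).numpy()  # binary sentiment labels

# The CNN's pooled features feed an SVM instead of a softmax output layer.
cnn = TextCNN()
with torch.no_grad():
    train_feats = cnn(train_tokens).numpy()
svm = SVC(kernel="rbf")
svm.fit(train_feats, train_labels)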


Author(s):  
George Dasoulas ◽  
Ludovic Dos Santos ◽  
Kevin Scaman ◽  
Aladin Virmaux

In this paper, we show that a simple coloring scheme can improve, both theoretically and empirically, the expressive power of Message Passing Neural Networks (MPNNs). More specifically, we introduce a graph neural network called Colored Local Iterative Procedure (CLIP) that uses colors to disambiguate identical node attributes, and we show that this representation is a universal approximator of continuous functions on graphs with node attributes. Our method relies on separability, a key topological characteristic that allows well-chosen neural networks to be extended into universal representations. Finally, we show experimentally that CLIP captures structural characteristics that traditional MPNNs fail to distinguish, while achieving state-of-the-art performance on benchmark graph classification datasets.
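
The toy sketch below illustrates the coloring idea in isolation: nodes that share identical attribute vectors receive distinct one-hot color channels, so a downstream MPNN can tell them apart. The actual CLIP method samples several such colorings at random and aggregates the resulting representations; this simplified, deterministic version only shows the disambiguation step.

# Minimal sketch of attribute disambiguation by coloring (not the reference
# CLIP implementation, which aggregates over multiple random colorings).
import numpy as np

def color_identical_nodes(node_attrs):
    """node_attrs: (num_nodes, d) array. Returns attrs with color channels."""
    keys = [tuple(row) for row in node_attrs]
    # One color channel per duplicate of the most-repeated attribute vector.
    max_dup = max(keys.count(k) for k in set(keys))
    colors = np.zeros((len(keys), max_dup))
    seen = {}
    for i, k in enumerate(keys):
        idx = seen.get(k, 0)   # next unused color for this attribute vector
        colors[i, idx] = 1.0
        seen[k] = idx + 1
    return np.concatenate([node_attrs, colors], axis=1)

# Two nodes with identical attributes receive different colors:
attrs = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(color_identical_nodes(attrs))
# [[1. 0. 1. 0.]
#  [1. 0. 0. 1.]
#  [0. 1. 1. 0.]]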


2021 ◽  
Author(s):  
Michelangelo Diligenti ◽  
Francesco Giannini ◽  
Marco Gori ◽  
Marco Maggini ◽  
Giuseppe Marra

Neural-symbolic models bridge the gap between sub-symbolic and symbolic approaches, both of which have significant limitations. Sub-symbolic approaches, like neural networks, require a large amount of labeled data to be successful, whereas symbolic approaches, like logic reasoners, require only a small amount of prior domain knowledge but do not easily scale to large collections of data. This chapter presents a general approach to integrating learning and reasoning that is based on the translation of the available prior knowledge into an undirected graphical model. Potentials on the graphical model are designed to accommodate dependencies among random variables by means of a set of trainable functions, like those computed by neural networks. The resulting neural-symbolic framework can effectively leverage the training data, when available, while exploiting high-level logic reasoning in a given domain of discourse. Although exact inference is intractable within this model, different tractable models can be derived by making different assumptions. In particular, three models are presented in this chapter: Semantic-Based Regularization, Deep Logic Models, and Relational Neural Machines. Semantic-Based Regularization is a scalable neural-symbolic model that does not adapt the parameters of the reasoner, under the assumption that the provided prior knowledge is correct and must be exactly satisfied. Deep Logic Models preserve the scalability of Semantic-Based Regularization while providing a flexible exploitation of logic knowledge by co-training the parameters of the reasoner during the learning procedure. Finally, Relational Neural Machines combine two fundamental advantages: they replicate the effectiveness of standard deep architectures trained from supervised data, and they preserve the generality and expressive power of Markov Logic Networks when pure reasoning on symbolic data is considered. The coupling between learning and reasoning is very general, as any (deep) learner can be adopted and any output structure expressible in First-Order Logic can be integrated. However, exact inference within a Relational Neural Machine is still intractable, and different factorizations are discussed to increase the scalability of the approach.
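
As a concrete illustration of the Semantic-Based Regularization idea, the sketch below relaxes a single first-order rule into a differentiable penalty added to a supervised loss. The example rule, the product t-norm relaxation, and the fixed weighting are illustrative assumptions rather than the chapter's exact formulation.

# Minimal sketch: a logic rule as a differentiable regularizer (assumed form).
import torch

def rule_penalty(p_animal, p_mammal):
    """Fuzzy relaxation of 'forall x: mammal(x) -> animal(x)' under the
    product t-norm: the rule is violated to the degree that p_mammal is
    high while p_animal is low."""
    return (p_mammal * (1.0 - p_animal)).mean()

def total_loss(logits_animal, logits_mammal, y_animal, y_mammal, lam=0.5):
    bce = torch.nn.functional.binary_cross_entropy_with_logits
    supervised = bce(logits_animal, y_animal) + bce(logits_mammal, y_mammal)
    constraint = rule_penalty(torch.sigmoid(logits_animal),
                              torch.sigmoid(logits_mammal))
    # A fixed lambda mirrors SBR's assumption that the prior knowledge is
    # correct: the reasoner's parameters are not adapted during training.
    return supervised + lam * constraint

# Usage on dummy predictions and labels:
logits_a, logits_m = torch.randn(8), torch.randn(8)
y_a = torch.randint(0, 2, (8,)).float()
y_m = torch.randint(0, 2, (8,)).float()
print(total_loss(logits_a, logits_m, y_a, y_m))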


2015 ◽  
Vol 27 (12) ◽  
pp. 2623-2660 ◽  
Author(s):  
Tom J. Ameloot ◽  
Jan Van den Bussche

We study the expressive power of positive neural networks. The model uses positive connection weights and multiple input neurons. Different behaviors can be expressed by varying the connection weights. We show that in discrete time and in the absence of noise, the class of positive neural networks captures the so-called monotone-regular behaviors, which are based on regular languages. A finer picture emerges if one takes into account the delay with which a monotone-regular behavior is implemented. Each monotone-regular behavior can be implemented by a positive neural network with a delay of one time unit. Some monotone-regular behaviors can be implemented with zero delay. Interestingly, some simple monotone-regular behaviors cannot be implemented with zero delay.
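
For intuition, the sketch below implements a single layer with the positivity constraint studied above: every connection weight is forced to be nonnegative, so each neuron's output is monotone in its inputs. This continuous, real-valued layer is an assumption for illustration only; the paper's model operates on discrete-time networks rather than standard differentiable layers.

# Minimal sketch of a positive-weight layer (illustrative, not the paper's model).
import torch
import torch.nn as nn

class PositiveLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        self.raw_weight = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x):
        # Squaring guarantees every connection weight is >= 0, so increasing
        # any input can never decrease a neuron's activation (monotonicity).
        return x @ (self.raw_weight ** 2).t()

layer = PositiveLinear(4, 2)
x = torch.rand(1, 4)
print(layer(x))  # outputs are monotone in each input coordinate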

