ON NIL: THE SOFTWARE CONSTRUCTOR OF NEURAL NETWORKS

1996 ◽  
Vol 06 (04) ◽  
pp. 575-582 ◽  
Author(s):  
HAVA T. SIEGELMANN

Analog recurrent neural networks have attracted much attention lately as powerful tools for automatic learning. However, they are not as popular in industry as their usefulness would justify. The lack of any programming tool for networks, and their opaque internal representation, leave the networks to experts only. We propose a way to make neural networks friendly to users by formally defining a high-level language, called the Neural Information Processing Programming Language (NIL), which is rich enough to express any computer algorithm or rule-based system. We show how to compile a NIL program into a network that computes exactly as the original program and requires the same computation/convergence time and physical size. Because natural neural evolution is allowed after construction, the resulting networks are both capable of dynamical continuous learning and able to represent any given symbolic knowledge. Thus, the language, along with its compiler, may be thought of as the ultimate bridge from symbolic to analog computation.
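
The abstract does not spell out NIL's syntax or compilation scheme, but the general idea of mapping symbolic rules onto network elements can be illustrated. Below is a minimal, hypothetical Python sketch (not NIL itself, and not Siegelmann's construction) showing how a single if-then rule can be realized by a threshold unit; the rule, weights, and threshold are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not NIL's actual compiler): encode the rule
#   IF wet AND cold THEN freeze
# as a single threshold (McCulloch-Pitts) unit over 0/1 truth values.

def threshold_unit(inputs, weights, bias):
    """Fire (return 1) when the weighted sum crosses the threshold."""
    return int(np.dot(weights, inputs) + bias > 0)

# AND of two antecedents: unit weights, threshold just above 1.
weights = np.array([1.0, 1.0])
bias = -1.5

for wet in (0, 1):
    for cold in (0, 1):
        freeze = threshold_unit(np.array([wet, cold]), weights, bias)
        print(f"wet={wet} cold={cold} -> freeze={freeze}")
```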

2021 ◽  
Vol 7 (22) ◽  
pp. eabe7547
Author(s):  
Meenakshi Khosla ◽  
Gia H. Ngo ◽  
Keith Jamison ◽  
Amy Kuceyeski ◽  
Mert R. Sabuncu

Naturalistic stimuli, such as movies, activate a substantial portion of the human brain, invoking a response shared across individuals. Encoding models that predict neural responses to arbitrary stimuli can be very useful for studying brain function. However, existing models focus on limited aspects of naturalistic stimuli, ignoring the dynamic interactions of modalities in this inherently context-rich paradigm. Using movie-watching data from the Human Connectome Project, we build group-level models of neural activity that incorporate several inductive biases about neural information processing, including hierarchical processing, temporal assimilation, and auditory-visual interactions. We demonstrate how incorporating these biases leads to remarkable prediction performance across large areas of the cortex, beyond the sensory-specific cortices into multisensory sites and frontal cortex. Furthermore, we illustrate that encoding models learn high-level concepts that generalize to task-bound paradigms. Together, our findings underscore the potential of encoding models as powerful tools for studying brain function in ecologically valid conditions.
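
As a rough illustration of the encoding-model idea (predicting regional brain responses from stimulus features), here is a minimal Python sketch using off-the-shelf components; the random stand-in data, the ridge regression readout, and all array shapes are assumptions, not the authors' multimodal architecture.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

# Minimal encoding-model sketch (assumed setup, not the paper's model):
# predict each brain parcel's response from precomputed stimulus features.
rng = np.random.default_rng(0)
n_timepoints, n_features, n_parcels = 600, 512, 300

# Stand-ins for real data: per-timepoint movie features (e.g. from a
# pretrained vision/audio network) and measured fMRI responses per parcel.
stimulus_features = rng.standard_normal((n_timepoints, n_features))
brain_responses = rng.standard_normal((n_timepoints, n_parcels))

X_train, X_test, Y_train, Y_test = train_test_split(
    stimulus_features, brain_responses, test_size=0.2, shuffle=False)

# One regularized linear readout per parcel (multi-output ridge regression).
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(X_train, Y_train)

# Standard evaluation: per-parcel correlation between predicted and held-out responses.
pred = model.predict(X_test)
r = [np.corrcoef(pred[:, i], Y_test[:, i])[0, 1] for i in range(n_parcels)]
print(f"median held-out correlation: {np.median(r):.3f}")
```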


Author(s):  
Ansgar Rössig ◽  
Milena Petkovic

Abstract We consider the problem of verifying linear properties of neural networks. Despite their success in many classification and prediction tasks, neural networks may return unexpected results for certain inputs. This is highly problematic with respect to the application of neural networks for safety-critical tasks, e.g. in autonomous driving. We provide an overview of algorithmic approaches that aim to provide formal guarantees on the behaviour of neural networks. Moreover, we present new theoretical results with respect to the approximation of ReLU neural networks. In addition, we implement a solver for verification of ReLU neural networks which combines mixed integer programming with specialized branching and approximation techniques. To evaluate its performance, we conduct an extensive computational study using test instances based on the ACAS Xu system and the MNIST handwritten digit data set. The results indicate that our approach is highly competitive with other solvers; in particular, it outperforms the solvers of Bunel et al. (in: Bengio, Wallach, Larochelle, Grauman, Cesa-Bianchi, Garnett (eds) Advances in neural information processing systems (NIPS 2018), 2018) and Reluplex (Katz et al. in: Computer aided verification—29th international conference, CAV 2017, Heidelberg, Germany, July 24–28, 2017, Proceedings, 2017). In comparison to the solvers ReluVal (Wang et al. in: 27th USENIX security symposium (USENIX Security 18), USENIX Association, Baltimore, 2018a) and Neurify (Wang et al. in: 32nd Conference on neural information processing systems (NIPS), Montreal, 2018b), the number of necessary branchings is much smaller. Our solver is publicly available and able to solve the verification problem for instances which do not have independent bounds for each input neuron.
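
The core building block of such mixed-integer-programming verifiers is the encoding of a single ReLU neuron. The sketch below shows the standard big-M formulation using PuLP; the weights, bounds, and toy query are illustrative assumptions, and the paper's specialized branching and approximation techniques are not reproduced.

```python
import pulp

# Standard big-M MIP encoding of y = max(0, w*x + b) for one ReLU neuron,
# given finite pre-activation bounds lb <= w*x + b <= ub (assumed known here).
lb, ub = -3.0, 5.0          # illustrative pre-activation bounds
w, b = 2.0, -1.0            # illustrative weight and bias

prob = pulp.LpProblem("relu_verification_sketch", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=-1.0, upBound=3.0)   # input neuron bounds
z = pulp.LpVariable("z", lowBound=lb, upBound=ub)      # pre-activation value
y = pulp.LpVariable("y", lowBound=0.0)                 # post-activation value
d = pulp.LpVariable("d", cat="Binary")                 # active/inactive phase

prob += z == w * x + b
# If d = 1 the ReLU is active (y = z); if d = 0 it is inactive (y = 0).
prob += y >= z
prob += y <= z - lb * (1 - d)
prob += y <= ub * d

# Toy "verification" query: how large can the output get over the input box?
prob += y
prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("maximum output over the input box:", pulp.value(y))
```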


1993 ◽  
Vol 6 (2) ◽  
pp. 75-81 ◽  
Author(s):  
D. F. Benson

The neuroanatomical region that has been most prominently altered with the advancing cognitive competency of the human is the prefrontal cortex, particularly its rostral extreme. While the prefrontal cortex does not appear to contain the neural networks that carry out cognitive activities, the management of these high-level manipulations, so uniquely characteristic of the human, appears to depend upon the prefrontal cortex.


Athenea ◽  
2021 ◽  
Vol 2 (5) ◽  
pp. 29-34
Author(s):  
Alexander Caicedo ◽  
Anthony Caicedo

The era of the technological revolution increasingly encourages the development of technologies that facilitate, in one way or another, people's daily activities, generating great advances in information processing. The purpose of this work is to implement a neural network that classifies a person's emotional states based on different human gestures. A database with information on students from the PUCE-E School of Computer Science and Engineering is used; this information consists of images that express the students' gestures, against which the comparative analysis with the input data is carried out. The project is implemented as a multilayer neural network. Multilayer feedforward neural networks possess a number of properties that make them particularly suitable for complex pattern classification problems [8]. Back-propagation [4], an error back-propagation training algorithm for feedforward neural networks, was used to solve the emotion classification task (a minimal sketch of this setup follows the reference list).

Keywords: Image processing, neural networks, gestures, back-propagation, feedforward, classification, emotions.

References
[1] S. Gangwar, S. Shukla, D. Arora. "Human Emotion Recognition by Using Pattern Recognition Network", Journal of Engineering Research and Applications, Vol. 3, Issue 5, pp. 535-539, 2013.
[2] K. Rohit. "Back Propagation Neural Network based Emotion Recognition System". International Journal of Engineering Trends and Technology (IJETT), Vol. 22, No. 4, 2015.
[3] S. Eishu, K. Ranju, S. Malika. "Speech Emotion Recognition using BFO and BPNN", International Journal of Advances in Science and Technology (IJAST), ISSN 2348-5426, Vol. 2, Issue 3, 2014.
[4] A. Fiszelew, R. García-Martínez and T. de Buenos Aires. "Generación automática de redes neuronales con ajuste de parámetros basado en algoritmos genéticos" [Automatic generation of neural networks with parameter tuning based on genetic algorithms]. Revista del Instituto Tecnológico de Buenos Aires, 26, 76-101, 2002.
[5] Y. LeCun, B. Boser, J. Denker, D. Henderson, R. Howard, W. Hubbard and L. Jackel. "Handwritten digit recognition with a back-propagation network". In Advances in Neural Information Processing Systems, pp. 396-404, 1990.
[6] G. Bebis and M. Georgiopoulos. "Feed-forward neural networks". IEEE Potentials, 13(4), 27-31, 1994.
[7] G. Huang, Q. Zhu and C. Siew. "Extreme learning machine: a new learning scheme of feedforward neural networks". In Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Vol. 2, pp. 985-990, 2004.
[8] D. Montana and L. Davis. "Training Feedforward Neural Networks Using Genetic Algorithms". In IJCAI, Vol. 89, pp. 762-767, 1989.
[9] I. Sutskever, O. Vinyals and Q. Le. "Sequence to sequence learning with neural networks". In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
[10] J. Schmidhuber. "Deep learning in neural networks: An overview". Neural Networks, 61, 85-117, 2015.
[11] R. Santos, M. Rupp, S. Bonzi and A. Fileti. "Comparación entre redes neuronales feedforward de múltiples capas y una red de función radial para detectar y localizar fugas en tuberías que transportan gas" [Comparison between multilayer feedforward neural networks and a radial basis function network to detect and locate leaks in gas pipelines]. Chemical Engineering Transactions, 32, 1375-1380, 2013.
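
As a minimal sketch of the feedforward/back-propagation setup described above, the following Python code trains a tiny two-layer network with plain gradient descent on stand-in data; the image size, number of emotion classes, and hyperparameters are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Minimal feedforward network trained with back-propagation (illustrative only):
# flattened gesture images -> sigmoid hidden layer -> emotion class scores.
rng = np.random.default_rng(0)
n_samples, n_pixels, n_hidden, n_classes = 200, 32 * 32, 64, 5  # assumed sizes

X = rng.standard_normal((n_samples, n_pixels))   # stand-in for gesture images
y = rng.integers(0, n_classes, n_samples)        # stand-in emotion labels
Y = np.eye(n_classes)[y]                         # one-hot targets

W1 = 0.01 * rng.standard_normal((n_pixels, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.01 * rng.standard_normal((n_hidden, n_classes))
b2 = np.zeros(n_classes)
lr = 0.1

for epoch in range(50):
    # Forward pass: sigmoid hidden layer, softmax output.
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)

    # Backward pass: cross-entropy gradient propagated layer by layer.
    dlogits = (P - Y) / n_samples
    dW2, db2 = H.T @ dlogits, dlogits.sum(axis=0)
    dH = (dlogits @ W2.T) * H * (1 - H)
    dW1, db1 = X.T @ dH, dH.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", (P.argmax(axis=1) == y).mean())
```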


2010 ◽  
Vol 22 (4) ◽  
pp. 1060-1085 ◽  
Author(s):  
Huijuan Fang ◽  
Yongji Wang ◽  
Jiping He

Recent investigation of cortical coding and computation indicates that temporal coding is probably a more biologically plausible scheme used by neurons than the rate coding used commonly in most published work. We propose and demonstrate in this letter that spiking neural networks (SNN), consisting of spiking neurons that propagate information through the timing of spikes, offer a better alternative to coding schemes based on spike frequency (histogram) alone. The SNN model analyzes cortical neural spike trains directly without losing temporal information, generating more reliable motor commands for cortically controlled prosthetics. In this letter, we compare the temporal pattern classification results of the SNN approach with results from firing-rate-based approaches: conventional artificial neural networks, support vector machines, and linear regression. The results show that the SNN algorithm can achieve higher classification accuracy and identify the spiking activity related to movement control earlier than the other methods. Both are desirable characteristics for fast neural information processing and reliable control command pattern recognition for neuroprosthetic applications.
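
To make the rate-versus-temporal-coding contrast concrete, the Python sketch below builds two feature sets from the same simulated spike trains: binned firing-rate histograms and simple timing features (first-spike latencies). The simulated data, bin width, and classifier are illustrative assumptions, not the paper's spiking-neural-network decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Illustrative contrast between rate-based and timing-based features
# extracted from the same spike trains (not the paper's SNN approach).
rng = np.random.default_rng(0)
n_trials, n_neurons, trial_ms = 120, 10, 500
labels = rng.integers(0, 2, n_trials)            # stand-in movement classes

def simulate_spikes(label):
    """Poisson-count spike trains; class 1 fires faster and starts earlier."""
    rate_hz = 20 + 10 * label
    onset = 40 * (1 - label)                     # class 0 starts 40 ms later
    n_spikes = rng.poisson(rate_hz * trial_ms / 1000, n_neurons)
    return [np.sort(rng.uniform(onset, trial_ms, k)) for k in n_spikes]

trials = [simulate_spikes(lbl) for lbl in labels]

# Rate coding: spike-count histogram per neuron (50 ms bins).
bins = np.arange(0, trial_ms + 1, 50)
rate_features = np.array([
    np.concatenate([np.histogram(st, bins)[0] for st in trial]) for trial in trials])

# Temporal coding (simplified): first-spike latency per neuron.
time_features = np.array([
    [st[0] if len(st) else trial_ms for st in trial] for trial in trials])

for name, feats in [("rate", rate_features), ("timing", time_features)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), feats, labels, cv=5)
    print(f"{name:6s} features: cross-validated accuracy {acc.mean():.2f}")
```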


1994 ◽  
Vol 49 (4-5) ◽  
pp. 589-593
Author(s):  
Axel A. Hoff

Abstract Chaotic behaviour in biological neural networks is known from various experiments. The recent finding that it is possible to "control" chaotic systems may help answer the question of whether chaos plays an active role in neural information processing. It is demonstrated that a method for chaos control proposed by Pyragas can be used to let a chaotic system act like an autoassociative memory for time-signal inputs. Specifically, a combined chaotic and chaos-control system can reconstruct unstable periodic orbits from incomplete information. The potential relevance of these findings for neural information processing is pointed out.
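
Pyragas control stabilizes an unstable periodic orbit embedded in a chaotic attractor by feeding back the delayed difference F(t) = K * (x(t - tau) - x(t)). The Python sketch below applies it to the Roessler system as a generic stand-in; the system, gain, and delay are illustrative assumptions, and the autoassociative-memory construction of the paper is not reproduced.

```python
import numpy as np

# Pyragas time-delayed feedback control, F(t) = K * (x(t - tau) - x(t)),
# applied to the chaotic Roessler system (illustrative stand-in system).
dt, t_max = 0.01, 400.0
a, b, c = 0.2, 0.2, 5.7          # standard chaotic Roessler parameters
K, tau = 0.2, 5.88               # feedback gain and delay (~ period-1 orbit)

n_steps = int(t_max / dt)
delay_steps = int(tau / dt)
state = np.array([1.0, 1.0, 1.0])
x_history = np.zeros(n_steps)    # past x values, needed for the delay term

for i in range(n_steps):
    x, y, z = state
    # Switch the control on only after the delay buffer has been filled.
    control = K * (x_history[i - delay_steps] - x) if i > 2 * delay_steps else 0.0
    dxdt = -y - z + control
    dydt = x + a * y
    dzdt = b + z * (x - c)
    state = state + dt * np.array([dxdt, dydt, dzdt])
    x_history[i] = state[0]

# If an unstable periodic orbit has been stabilized, the feedback term
# becomes small, i.e. the control is essentially non-invasive.
late = np.abs(K * (x_history[-1000:] - x_history[-1000 - delay_steps:-delay_steps]))
print("mean |control| over the last 10 time units:", late.mean())
```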


2001 ◽  
Vol 11 (03) ◽  
pp. 655-676 ◽  
Author(s):  
M. FORTI ◽  
A. TESI

In recent years, the standard Cellular Neural Networks (CNN's) introduced by Chua and Yang [1988] have been one of the most investigated paradigms for neural information processing. In a wide range of applications, the CNN's are required to be completely stable, i.e. each trajectory should converge toward some stationary state. However, a rigorous proof of complete stability, even in the simplest original setting of piecewise-linear (PWL) neuron activations and symmetric interconnections [Chua & Yang, 1988], is still lacking. This paper aims primarily at filling this gap, in order to give a sound analytical foundation to the CNN paradigm. To this end, a novel approach for studying complete stability is proposed. This is based on a fundamental limit theorem for the length of the CNN trajectories. The method differs substantially from the classic approach using the LaSalle invariance principle, and makes it possible to overcome difficulties encountered when using the LaSalle approach to analyze complete stability of PWL CNN's. The main result obtained is that a symmetric PWL CNN is completely stable for any choice of the network parameters, i.e. it possesses the Absolute Stability property of global pattern formation. This result is very general and shows that complete stability holds under hypotheses weaker than those considered in [Chua & Yang, 1988]. The result does not require, for example, that the CNN has binary stable equilibrium points only. It is valid even in degenerate situations where the CNN has infinite nonisolated equilibrium points. These features significantly extend the potential application fields of the standard CNN's.
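
For reference, the standard Chua-Yang CNN evolves as dx_i/dt = -x_i + sum_j A_ij * y_j + I_i with the piecewise-linear output y_j = 0.5 * (|x_j + 1| - |x_j - 1|). The short Python simulation below integrates a small CNN with a symmetric interconnection matrix from a random initial state and checks that the trajectory settles at a stationary state, as complete stability predicts; the matrix, bias, and network size are illustrative assumptions.

```python
import numpy as np

# Small symmetric piecewise-linear CNN (Chua-Yang model), Euler-integrated:
#   dx_i/dt = -x_i + sum_j A_ij * y_j + I_i,  y_j = 0.5 * (|x_j + 1| - |x_j - 1|)
# With symmetric interconnections, every trajectory should converge.
rng = np.random.default_rng(1)
n = 16                                   # number of cells (illustrative)

A = rng.standard_normal((n, n))
A = 0.5 * (A + A.T)                      # enforce symmetric interconnections
np.fill_diagonal(A, 2.0)                 # self-feedback > 1, a typical CNN choice
I = 0.1 * rng.standard_normal(n)         # constant bias currents

def pwl(x):
    """Standard CNN output nonlinearity: linear in [-1, 1], saturating outside."""
    return 0.5 * (np.abs(x + 1) - np.abs(x - 1))

x = rng.standard_normal(n)
dt = 0.01
for step in range(20000):
    x = x + dt * (-x + A @ pwl(x) + I)

residual = np.linalg.norm(-x + A @ pwl(x) + I)
print("final |dx/dt| (should be ~0 at a stationary state):", residual)
```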

