Continuous Attractor Neural Networks

Author(s):  
Thomas P. Trappenberg

This chapter gives a brief review of computational systems that are motivated by information processing in the brain, an area often called neurocomputing or artificial neural networks. While this is now a well-studied and well-documented area, specific emphasis is given to a subclass of such models, called continuous attractor neural networks, which are emerging in a wide range of biologically inspired computing. The frequent appearance of such models in biologically motivated studies of brain function gives some indication that they might capture important information-processing mechanisms used in the brain, either directly or indirectly. Most of the chapter is dedicated to an introduction to this basic model and to some extensions that might be important for its application, either as a model of brain processing or in technical settings. Direct technical applications are only emerging slowly, but some promising directions are highlighted in the chapter.
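
The central mechanism behind such models can be illustrated with a small simulation. The following is a minimal sketch, assuming a one-dimensional ring of rate neurons with local excitation and broad inhibition (a common textbook formulation, not necessarily the exact equations of the chapter); after a transient cue is removed, a localized bump of activity persists and its position encodes the remembered value.

```python
# Minimal sketch of a one-dimensional (ring) continuous attractor network.
# All parameter values are illustrative assumptions, not taken from the chapter.
import numpy as np

N = 128
theta = np.linspace(0, 2 * np.pi, N, endpoint=False)
diff = theta[:, None] - theta[None, :]

# local excitation minus uniform inhibition (Mexican-hat-like profile)
J_exc, J_inh, sigma = 12.0, 4.0, 0.5
W = (J_exc * np.exp((np.cos(diff) - 1.0) / sigma**2) - J_inh) / N

def g(u):                      # steep sigmoid keeps rates in [0, 1]
    return 1.0 / (1.0 + np.exp(-8.0 * (u - 0.5)))

r = np.zeros(N)
cue = 0.8 * np.exp((np.cos(theta - np.pi) - 1.0) / sigma**2)  # transient cue at pi

dt, tau = 0.1, 1.0
for step in range(800):
    inp = cue if step < 200 else 0.0        # remove the cue after 200 steps
    r += dt / tau * (-r + g(W @ r + inp))   # rate dynamics

# After the cue is withdrawn, a bump of activity persists; its peak position
# encodes the remembered angle (one point on a continuum of attractor states).
print("remembered angle ~", theta[np.argmax(r)])
```

The persistence of the bump after the cue is withdrawn, at any position along the ring, is what makes the continuum of bump states an attractor.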

Author(s):  
Eduardo D. Martin ◽  
Alfonso Araque

Artificial neural networks are a neurobiologically inspired paradigm that emulates the functioning of the brain. They are based on neuronal function, because neurons are recognized as the cellular elements responsible for brain information processing. However, recent studies have demonstrated that astrocytes can signal to other astrocytes and can communicate reciprocally with neurons, which suggests a more active role of astrocytes in nervous system physiology and in fundamental brain functions. This novel vision of the glial role in brain function calls for a reexamination of our current view of artificial neural networks, which should be expanded to consider artificial neuroglial networks. The neuroglial network concept has not yet been applied to the computational and artificial intelligence sciences. However, implementing artificial neuroglial networks by incorporating glial cells into artificial neural networks may prove as fruitful and successful for artificial networks as glial cells have been for biological networks.
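
As an illustration of the concept, the following is a minimal sketch of one possible neuroglial arrangement: an astrocyte unit slowly integrates the activity of the neurons it contacts and feeds a modulatory gain signal back to them. The equations, the gain term, and the time constant are illustrative assumptions, not a model proposed by the authors.

```python
# Hedged sketch of the "artificial neuroglial network" idea: an astrocyte unit
# slowly integrates neuronal activity and modulates the neurons' gain in return.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, size=(3, 4))        # 4 inputs -> 3 neurons

def neuroglial_step(x, astro, tau_astro=10.0, gain=0.5):
    """One forward pass with reciprocal neuron-astrocyte signalling."""
    neuron_out = np.tanh((1.0 + gain * astro) * (W @ x))      # astrocyte modulates gain
    astro += (np.mean(np.abs(neuron_out)) - astro) / tau_astro  # slow glial integration
    return neuron_out, astro

astro = 0.0
for t in range(50):
    x = rng.normal(size=4)
    y, astro = neuroglial_step(x, astro)
print("astrocyte state after 50 steps:", round(astro, 3))
```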


2010 ◽  
Vol 61 (2) ◽  
pp. 120-124 ◽  
Author(s):  
Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANN) generally classify patterns according to their relationships, responding to related patterns with a similar output. Polynomial neural networks (PNN) are capable of organizing themselves in response to certain features (relations) of the data. A polynomial neural network for dependence-of-variables identification (D-PNN) describes a functional dependence of input variables (not of entire patterns). It approximates the hyper-surface of this function with multi-parametric particular polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network is based on the GMDH polynomial neural network and was designed by the author. The D-PNN operates in a way closer to brain learning than the ANN does. The ANN is in principle a simplified form of the PNN, in which the combinations of input variables are missing.
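
A minimal sketch of the GMDH-style polynomial layer that this family of networks builds on is shown below: each candidate unit fits a quadratic polynomial of one input pair by least squares, and the best units are kept as inputs to a next layer. The selection rule and toy data are assumptions for illustration and do not reproduce the author's D-PNN.

```python
# Minimal GMDH-style polynomial neural network layer (the family D-PNN builds on).
import numpy as np
from itertools import combinations

def quad_features(xi, xj):
    # full quadratic polynomial terms of one input pair
    return np.column_stack([np.ones_like(xi), xi, xj, xi * xj, xi**2, xj**2])

def gmdh_layer(X, y, n_keep=3):
    """Fit one polynomial unit per input pair, keep the n_keep best by MSE."""
    units = []
    for i, j in combinations(range(X.shape[1]), 2):
        A = quad_features(X[:, i], X[:, j])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        mse = np.mean((A @ coef - y) ** 2)
        units.append((mse, i, j, coef))
    units.sort(key=lambda u: u[0])
    kept = units[:n_keep]
    # outputs of the kept units become the inputs of the next layer
    Z = np.column_stack([quad_features(X[:, i], X[:, j]) @ c for _, i, j, c in kept])
    return kept, Z

# toy example: the target depends on the product of two of the four inputs
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
y = 2.0 * X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200)
kept, Z = gmdh_layer(X, y)
print("best pair:", kept[0][1:3], "mse:", round(kept[0][0], 4))
```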


Author(s):  
А.В. Милов

This article presents mathematical models based on artificial neural networks that are used to control induction soldering. The artificial neural networks were trained with the FFGA multicriteria genetic algorithm. The developed models make it possible to control induction soldering under conditions of incomplete or unreliable information, as well as in the complete absence of information about the technological process.
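
The general idea of evolving a network's weights with a genetic algorithm can be sketched as follows. This is a generic single-objective GA used purely for illustration; the paper's multicriteria FFGA and the actual soldering-process models are not reproduced here, and the toy target function is an assumption.

```python
# Hedged sketch: evolving the weights of a small neural network with a simple
# genetic algorithm (not the FFGA algorithm used in the paper).
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(64, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]            # toy "process" to approximate

def forward(w, X):
    W1 = w[:16].reshape(2, 8); b1 = w[16:24]   # 2-8-1 network encoded as a flat vector
    W2 = w[24:32];             b2 = w[32]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w):
    return -np.mean((forward(w, X) - y) ** 2)  # higher is better (negative MSE)

pop = rng.normal(0, 0.5, size=(40, 33))        # 40 candidate weight vectors
for gen in range(200):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-20:]]    # truncation selection
    children = parents[rng.integers(0, 20, 20)] + rng.normal(0, 0.1, (20, 33))
    pop = np.vstack([parents, children])       # elitism plus mutated offspring

best = pop[np.argmax([fitness(w) for w in pop])]
print("final mse:", round(-fitness(best), 4))
```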


Author(s):  
Yingxu Wang ◽  
Bernard Carlos Widrow ◽  
Bo Zhang ◽  
Witold Kinsner ◽  
Kenji Sugawara ◽  
...  

Contemporary science and engineering have recently refocused on a fundamental question: how does the brain process internal and external information autonomously and cognitively, rather than imperatively like conventional computers? Cognitive Informatics (CI) is a transdisciplinary enquiry spanning computer science, information science, cognitive science, and intelligence science that investigates the internal information-processing mechanisms and processes of the brain and natural intelligence, as well as their engineering applications in cognitive computing. This paper reports a set of eight position statements presented in the plenary panel on Cognitive Informatics and Its Future Development at IEEE ICCI'10, contributed by invited panelists who are among the world's renowned researchers and scholars in the field of cognitive informatics and cognitive computing.


Author(s):  
Pankaj Dadheech ◽  
Ankit Kumar ◽  
Vijander Singh ◽  
Linesh Raja ◽  
Ramesh C. Poonia

These networks take a different approach to problem solving than conventional computers do. Artificial neural networks are comparatively crude electronic designs based on the neural structure of the brain. The chapter describes two different approaches to training, supervised and unsupervised, as well as real-time applications of artificial neural networks. Based on the nature of the application and the strength of the internal data patterns, we can normally expect a network to train quite well. ANNs offer an analytical alternative to conventional techniques, which are often restricted by strict assumptions of normality, linearity, variable independence, etc. The chapter also describes the requirements for pest management through pheromones, explains the different types of pests, and focuses on the use of pest-control pheromones.
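
As a minimal illustration of the two training regimes mentioned above, the sketch below contrasts a supervised perceptron-style update, which uses target labels, with an unsupervised Hebbian update (Oja's rule), which uses only the data. The toy data and learning rates are assumptions.

```python
# Hedged sketch contrasting supervised and unsupervised learning rules.
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
targets = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels

# supervised: adjust weights in proportion to the output error
w_sup, lr = np.zeros(3), 0.1
for x, t in zip(X, targets):
    y = float(x @ w_sup > 0)
    w_sup += lr * (t - y) * x

# unsupervised: Hebbian rule with decay (Oja's rule), no labels needed;
# the weight vector drifts toward the direction of largest variance
w_uns = rng.normal(size=3)
for x in X:
    y = x @ w_uns
    w_uns += 0.01 * y * (x - y * w_uns)

print("supervised weights:  ", np.round(w_sup, 2))
print("unsupervised weights:", np.round(w_uns, 2))
```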


2020 ◽  
Vol 10 (6) ◽  
pp. 389
Author(s):  
David Sandor Kiss ◽  
Istvan Toth ◽  
Gergely Jocsak ◽  
Zoltan Barany ◽  
Tibor Bartha ◽  
...  

Anatomically, the brain is a symmetric structure. However, growing evidence suggests that certain higher brain functions are regulated by only one of the otherwise duplicated (and symmetric) brain halves. Hemispheric specialization correlates with phylogeny, supporting intellectual evolution by providing an ergonomic way of brain processing. The more complex the task, the greater the benefits of functional lateralization (all higher functions show some degree of lateralized task sharing). Functional asymmetry has been broadly studied in several brain areas with mirrored halves, such as the telencephalon and hippocampus. Despite its paired structure, the hypothalamus has generally been considered a functionally unpaired unit; nonetheless, the regulation of a vast number of strongly interrelated homeostatic processes is attributed to this relatively small brain region. In this review, we collected the available knowledge supporting the hypothesis that a functional lateralization of the hypothalamus exists. We collected and discussed findings from previous studies that have demonstrated lateralized hypothalamic control of reproductive functions and energy expenditure. In addition, sporadic data suggest the existence of a partial functional asymmetry in the regulation of the circadian rhythm, body temperature, and circulatory functions. These hitherto neglected data highlight the likely high-level ergonomics provided by such functional asymmetry.


Proceedings ◽  
2019 ◽  
Vol 21 (1) ◽  
pp. 46
Author(s):  
Francisco Cedron ◽  
Sara Alvarez-Gonzalez ◽  
Alejandro Pazos ◽  
Ana B. Porto-Pazos

The artificial neural networks used in a multitude of fields are achieving good results. However, these systems are inspired by the view of classical neuroscience in which neurons are the only elements that process information in the brain. Advances in neuroscience have shown that a type of glial cell, the astrocyte, collaborates with neurons to process information. In this work, a connectionist system formed by neurons and artificial astrocytes is presented. The astrocytes can have different configurations to achieve biologically more realistic behaviour. This work indicates that the use of different artificial astrocyte behaviours is beneficial.
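
One way such configurable astrocyte behaviour could look in code is sketched below: an artificial astrocyte monitors the recent activity of a neuron and, when that activity stays above a configurable threshold for a configurable number of steps, boosts the associated weight. The specific parameters and the boosting rule are assumptions, not the configurations evaluated by the authors.

```python
# Hedged sketch of a connectionist unit with an attached, configurable
# artificial astrocyte that modulates a weight based on recent activity.
import numpy as np
from dataclasses import dataclass

@dataclass
class AstrocyteConfig:
    window: int = 4        # how many recent steps the astrocyte monitors
    threshold: float = 0.7 # mean neuronal activity that activates the astrocyte
    boost: float = 1.25    # multiplicative weight change while active

def simulate(config, inputs, w=0.8):
    history, outputs = [], []
    for x in inputs:
        y = 1.0 / (1.0 + np.exp(-w * x))     # neuron with sigmoid activation
        history.append(y)
        # astrocyte fires when recent activity stays high, and boosts the weight
        if len(history) >= config.window and \
           np.mean(history[-config.window:]) > config.threshold:
            w *= config.boost
        outputs.append(y)
    return outputs, w

inputs = np.full(10, 2.0)                    # sustained strong input
_, w_final = simulate(AstrocyteConfig(), inputs)
print("weight after astrocyte modulation:", round(w_final, 3))
```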


2014 ◽  
Vol 24 (08) ◽  
pp. 1450029 ◽  
Author(s):  
JÉRÉMIE CABESSA ◽  
HAVA T. SIEGELMANN

We study the computational capabilities of a biologically inspired neural model in which the synaptic weights, the connectivity pattern, and the number of neurons can evolve over time rather than stay static. Our study focuses on the mere concept of plasticity of the model, so the nature of the updates is not constrained. In this context, we show that the so-called plastic recurrent neural networks (RNNs) are capable of precisely the same super-Turing computational power as static analog neural networks, irrespective of whether their synaptic weights are modeled by rational or real numbers, and irrespective of whether their patterns of plasticity are restricted to bi-valued updates or expressed in some more general form of updating. Consequently, the incorporation of only bi-valued plastic capabilities in a basic RNN model suffices to break the Turing barrier and achieve the super-Turing level of computation. The consideration of more general mechanisms of architectural plasticity, or of real synaptic weights, does not further increase the capabilities of the networks. These results support the claim that the general mechanism of plasticity is crucially involved in the computational and dynamical capabilities of biological neural networks. They further show that the super-Turing level of computation reflects in a suitable way the capabilities of brain-like models of computation.
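
The notion of bi-valued plastic updates can be illustrated with a toy simulation: each recurrent connection holds one of two admissible values, and the assignment is allowed to change over time. This is only an illustration of the model class (saturated-linear analog RNNs with evolving weights), not the formal construction used in the proofs.

```python
# Toy illustration of a "plastic" RNN whose weights switch between two values.
import numpy as np

rng = np.random.default_rng(4)
N = 5
mask = rng.random((N, N)) < 0.5        # which connections use the "high" value
W_low, W_high = -0.2, 0.6              # the two admissible synaptic values

def sigma(u):                          # saturated-linear activation of this model class
    return np.clip(u, 0.0, 1.0)

x = rng.random(N)
for t in range(20):
    # bi-valued plastic update: each connection holds one of two values,
    # and the assignment may change from step to step
    if t % 5 == 0:
        mask = rng.random((N, N)) < 0.5
    W = np.where(mask, W_high, W_low)
    x = sigma(W @ x + 0.1)

print("state after 20 plastic steps:", np.round(x, 3))
```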

