Synchronization of neural activity and information processing

1998 ◽  
Vol 21 (6) ◽  
pp. 833-833 ◽  
Author(s):  
Roman Borisyuk ◽  
Galina Borisyuk ◽  
Yakov Kazanovich

Synchronization of neural activity in oscillatory neural networks is a general principle of information processing in the brain at both preattentional and attentional levels. This is confirmed by a model of attention based on an oscillatory neural network with a central element and models of feature binding and working memory based on multi-frequency oscillations.
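The central-element idea can be illustrated with a Kuramoto-style toy of our own (not the authors' model; all parameters are assumptions): several peripheral phase oscillators couple to one central oscillator, and with sufficient coupling the peripherals phase-lock to the central element.

```python
import numpy as np

# Hypothetical sketch: peripheral oscillators coupled to a central element.
# With strong coupling they phase-lock to it (coherence near 1).

def simulate(n_periph=5, coupling=2.0, steps=5000, dt=0.01, seed=0):
    rng = np.random.default_rng(seed)
    omega_c = 1.0                               # central natural frequency
    omega_p = rng.uniform(0.8, 1.2, n_periph)   # peripheral natural frequencies
    phi_c = 0.0
    phi_p = rng.uniform(0, 2 * np.pi, n_periph)
    for _ in range(steps):
        # the central element adapts toward the mean peripheral phase
        phi_c += dt * (omega_c + coupling * np.mean(np.sin(phi_p - phi_c)))
        # each peripheral oscillator is pulled toward the central phase
        phi_p += dt * (omega_p + coupling * np.sin(phi_c - phi_p))
    # phase coherence with the central element (1.0 = full synchronization)
    return float(np.abs(np.mean(np.exp(1j * (phi_p - phi_c)))))

print(simulate(coupling=2.0))   # close to 1 when coupling is strong
```

In such models, "attending" to a subset of oscillators corresponds to the central element synchronizing selectively with them.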

1994 ◽  
Vol 6 (4) ◽  
pp. 658-667 ◽  
Author(s):  
Yukio Hayashi

One of the advantages of oscillatory neural networks is their dynamic links among related features; the links are based on input-dependent synchronized oscillations. This paper investigates the relations between synchronous/asynchronous oscillations and the connection architectures of an oscillatory neural network with two excitatory-inhibitory pair oscillators. Through numerical analysis, we show synchronous and asynchronous connection types over a wide parameter space for two different inputs and one connection parameter. The results are not only consistent with the classification of synchronous/asynchronous connection types in König's model (1991), but also offer a useful guideline on how to construct a network with local connections for a segmentation task.
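The flavor of such a system can be sketched with two linearized excitatory-inhibitory pairs and diffusive excitatory-to-excitatory coupling (a simplification of ours, not Hayashi's equations): each uncoupled pair rotates in the (E, I) plane, and the coupling damps the phase difference so the pairs synchronize.

```python
import numpy as np

# Two linearized E-I pair oscillators with diffusive E-E coupling (toy model).
# Inhibition drives E down, excitation drives I up, so each pair oscillates;
# the coupling term pulls the two excitatory activities together.

def run(k=0.3, T=60.0, dt=0.001):
    E = np.array([1.0, -1.0])           # start the two pairs in antiphase
    I = np.array([0.0, 0.0])
    for _ in range(int(T / dt)):
        dE = -I + k * (E[::-1] - E)     # -I: inhibition; k(...): E-E coupling
        dI = E                          # excitation charges the inhibitory unit
        E, I = E + dt * dE, I + dt * dI
    return E

E_sync = run(k=0.3)                     # coupled: phase difference decays away
E_free = run(k=0.0)                     # uncoupled: antiphase persists
print(abs(E_sync[0] - E_sync[1]), abs(E_free[0] - E_free[1]))
```

The difference mode obeys a damped oscillator equation with damping 2k, which is the linear analogue of the synchronizing connection types mapped in the paper.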


1990 ◽  
Vol 2 (4) ◽  
pp. 219-219 ◽  
Author(s):  
Mitsuo Wada

It is well known that robots, thanks to progress in robot technology, are being applied skillfully and with favorable performance in a variety of fields, particularly in the Japanese manufacturing industry. Today, robots are expected to work alongside humans, and in the near future to be used in home life in compliance with human needs. Pessimistically speaking, however, it is impossible to deny that conventional robots, such as teaching-playback robots (which humans must operate directly), cannot play the roles expected of them in the future, so the development of a new control system that surpasses conventional systems in performance is indispensable. In other words, flexible control systems by which robots can behave autonomously, with minimal human interference, are urgently required. We believe the following three elements are indispensable for a robot to be equipped with flexibility: a) manipulators/hands and legs/wheels with human-like flexibility; b) flexible and intelligent motion control for manipulation/handling and locomotion; c) flexible intelligence and a sense of judgement that permit the robot to execute motions autonomously, adapting itself to the requirements of the human environment.

Solving these problems requires investigation into information processing and a study of the function of the brain and central nervous system of humans and other living bodies. Information-processing theory based on neural networks, which simulate the functions of the brain, has thus progressed rapidly, activating R & D on applications such as motion control and speech processing that conventional von Neumann computers found difficult to handle. Neural networks have the capacity for parallel distributed processing and self-organization as well as learning; their theory has provided an effective basis for the realization of flexible robots. In the areas b) and c) mentioned earlier, neural network theory has great potential for application to robots, so attention is being focused on it. Nevertheless, information processing by neural networks is not omnipotent for solving such problems. At present it is difficult for a neural network to solve problems that require complex calculations in robot control, for instance controls that take force and acceleration into account. Control of flexible robots that mobilize whole arms will require parallel processing of data obtained from many sensors and control of numerous degrees of freedom of motion. It has therefore become increasingly important to combine such problems inherent to robots with the parallel processing, self-organization, and learning ability of neural networks. From this point of view, further promotion of R & D on the application of neural network technology to robots is important. These efforts will produce a new neural network theory for robots and eventually permit autonomous motion.

This special issue compiles articles on applications of neural networks to robots, produced in the environment described above, ranging from a review of neuromorphic control, through dynamic system control, optimal trajectory planning, motion planning for handling, and manipulator locomotion and travelling, to problems in application systems. We hope these articles help our readers understand the present state of Japanese R & D on the application of neural networks to robots, as well as new subjects for future progress. Finally, we gratefully acknowledge Prof. Toshio Fukuda (who contributed a review) and the other contributors for their latest achievements.


2010 ◽  
Vol 61 (2) ◽  
pp. 120-124 ◽  
Author(s):  
Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANNs) in general classify patterns according to their relationships, responding to related patterns with similar outputs. Polynomial neural networks (PNNs) are capable of organizing themselves in response to features (relations) of the data. The polynomial neural network for dependence-of-variables identification (D-PNN) describes a functional dependence of input variables (not entire patterns). It approximates the hyper-surface of this function with multi-parametric particular polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network is based on the GMDH polynomial neural network and was designed by the author. The D-PNN operates in a way closer to brain learning than the ANN does. The ANN is in principle a simplified form of the PNN, in which the combinations of input variables are missing.
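The GMDH building block that such networks extend can be illustrated with a single two-input quadratic (Ivakhnenko) unit fitted by least squares (a generic sketch, not the author's D-PNN implementation):

```python
import numpy as np

# A basic GMDH unit: a full quadratic polynomial in two inputs, with
# coefficients fitted by ordinary least squares.

def design(x1, x2):
    # columns of the Ivakhnenko polynomial: 1, x1, x2, x1*x2, x1^2, x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_poly_unit(x1, x2, y):
    coef, *_ = np.linalg.lstsq(design(x1, x2), y, rcond=None)
    return coef

def predict(coef, x1, x2):
    return design(x1, x2) @ coef

rng = np.random.default_rng(1)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 0.5 + 2.0 * x1 * x2 - x2**2      # a dependence the unit can represent exactly
coef = fit_poly_unit(x1, x2, y)
print(np.allclose(predict(coef, x1, x2), y))
```

A full GMDH network stacks many such pairwise units in layers, keeping the best-performing ones at each stage.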


2013 ◽  
Vol 7 (1) ◽  
pp. 49-62 ◽  
Author(s):  
Vijaykumar Sutariya ◽  
Anastasia Groshev ◽  
Prabodh Sadana ◽  
Deepak Bhatia ◽  
Yashwant Pathak

Artificial neural network (ANN) technology models the pattern-recognition capabilities of the neural networks of the brain. Like a single neuron in the brain, an artificial neuron unit receives inputs from many external sources, processes them, and makes decisions. ANNs simulate the biological nervous system and draw on analogues of adaptive biological neurons. ANNs do not require rigidly structured experimental designs and can map functions using historical or incomplete data, which makes them a powerful tool for the simulation of various non-linear systems. ANNs have many applications in various fields, including engineering, psychology, medicinal chemistry, and pharmaceutical research. Because of their capacity for prediction, pattern recognition, and modeling, ANNs have been very useful in many aspects of pharmaceutical research, including modeling of the brain's neural network, analytical data analysis, drug modeling, protein structure and function, dosage optimization and manufacturing, pharmacokinetic and pharmacodynamic modeling, and in vitro-in vivo correlations. This review discusses the applications of ANNs in drug delivery and pharmacological research.
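The function-mapping ability credited to ANNs here can be illustrated with a tiny network fitted to a synthetic sigmoidal dose-response curve (our own toy data, architecture, and hyperparameters, not taken from the review):

```python
import numpy as np

# One tanh hidden layer, linear output, trained by full-batch gradient
# descent on mean squared error against a synthetic dose-response curve.

rng = np.random.default_rng(0)
x = np.linspace(0.0, 4.0, 50).reshape(-1, 1)        # dose (arbitrary units)
y = 1.0 / (1.0 + np.exp(-3.0 * (x - 2.0)))          # synthetic response

W1 = rng.normal(0, 0.5, (1, 8)); b1 = np.zeros(8)   # input -> 8 hidden units
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)   # hidden -> output

losses, lr = [], 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)                # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(losses[0], losses[-1])                        # loss falls as the net fits
```

The same recipe, scaled up and given real historical data, is what underlies most of the pharmacokinetic and dose-optimization applications surveyed in the review.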


Author(s):  
Thomas P. Trappenberg

This chapter discusses the basic operation of an artificial neural network which is the major paradigm of deep learning. The name derives from an analogy to a biological brain. The discussion begins by outlining the basic operations of neurons in the brain and how these operations are abstracted by simple neuron models. It then builds networks of artificial neurons that constitute much of the recent success of AI. The focus of this chapter is on using such techniques, with subsequent consideration of their theoretical embedding.
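The abstraction the chapter describes can be written in a few lines: an artificial neuron computes a weighted sum of its inputs plus a bias and passes it through a nonlinear activation (a generic sketch, not the chapter's own code):

```python
import numpy as np

def neuron(inputs, weights, bias):
    # weighted sum of inputs plus bias, squashed by a logistic sigmoid
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

def layer(inputs, W, b):
    # a layer is just many neurons sharing the same inputs
    return 1.0 / (1.0 + np.exp(-(W @ inputs + b)))

x = np.array([0.5, -1.0, 2.0])
out = neuron(x, np.array([1.0, 0.2, -0.3]), 0.1)
print(out)   # a single activation in (0, 1)
```

Networks are then built by feeding one layer's outputs into the next, which is all "deep" refers to: many such layers composed in sequence.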


2012 ◽  
Vol 263-266 ◽  
pp. 3374-3377 ◽  
Author(s):  
Hua Liang Wu ◽  
Zhen Dong Mu ◽  
Jian Feng Hu

In classification applications, neural networks are often used as the classifier. In this paper a neural network is applied to the analysis of motor-imagery EEG: the EEG is first subjected to the Hjorth conversion, the signal is then transformed into the frequency domain, and finally the Fisher distance is used for feature extraction. Recognition reached 97.86% on the study (training) samples and 80% on the test samples.
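The Hjorth conversion used as the first stage can be sketched from the standard definitions (activity, mobility, complexity); the frequency-domain, Fisher-distance, and classifier stages are not reproduced here.

```python
import numpy as np

# Hjorth parameters of a signal: activity (variance), mobility (dominant
# frequency proxy), and complexity (bandwidth proxy).

def hjorth(x):
    dx = np.diff(x)                    # first difference ~ derivative
    ddx = np.diff(dx)                  # second difference
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

t = np.linspace(0, 1, 500, endpoint=False)   # 1 s sampled at 500 Hz
slow = np.sin(2 * np.pi * 5 * t)             # 5 Hz component
fast = np.sin(2 * np.pi * 30 * t)            # 30 Hz component
print(hjorth(slow), hjorth(fast))            # higher frequency -> higher mobility
```

Because mobility scales with the dominant frequency, these three numbers give a compact per-channel feature vector before any frequency-domain step.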


2012 ◽  
Vol 24 (2) ◽  
pp. 523-540 ◽  
Author(s):  
Dimitrije Marković ◽  
Claudius Gros

A massively recurrent neural network responds on one side to input stimuli and, on the other, is autonomously active in the absence of sensory inputs. Stimuli and information processing depend crucially on the qualia of the autonomous-state dynamics of the ongoing neural activity. This default neural activity may be dynamically structured in time and space, showing regular, synchronized, bursting, or chaotic activity patterns. We study the influence of nonsynaptic plasticity on the default dynamical state of recurrent neural networks. The nonsynaptic adaptation considered acts on intrinsic neural parameters, such as the threshold and the gain, and is driven by the optimization of the information entropy. We observe, in the presence of the intrinsic adaptation processes, three distinct and globally attracting dynamical regimes: a regular synchronized, an overall chaotic, and an intermittent bursting regime. The intermittent bursting regime is characterized by intervals of regular flows, which are quite insensitive to external stimuli, interceded by chaotic bursts that respond sensitively to input signals. We discuss these findings in the context of self-organized information processing and critical brain dynamics.
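Entropy-driven intrinsic adaptation of this kind can be sketched for a single neuron with Triesch's gradient rule, which adapts the gain and threshold of a sigmoidal unit so that its output distribution approaches an exponential with target mean (maximum entropy for a fixed mean on the positive half-line). This is a related single-neuron rule, not necessarily the exact adaptation used in the paper, and all parameters are assumptions.

```python
import numpy as np

# Intrinsic plasticity for y = sigmoid(a*x + b): adapt gain a and threshold b
# toward an exponential output distribution with mean mu (Triesch-style rule).

rng = np.random.default_rng(0)
a, b = 1.0, 0.0            # intrinsic gain and threshold
mu, eta = 0.2, 0.01        # target mean firing rate, learning rate

outputs = []
for _ in range(20000):
    x = rng.normal(0.0, 1.0)                    # stand-in for synaptic drive
    y = 1.0 / (1.0 + np.exp(-(a * x + b)))
    db = eta * (1.0 - (2.0 + 1.0 / mu) * y + y * y / mu)
    da = eta / a + db * x
    a, b = a + da, b + db
    outputs.append(y)

print(np.mean(outputs[-5000:]))                 # settles near the target mean
```

In a recurrent network, running such a rule on every unit is what produces the globally attracting regimes described above: the intrinsic parameters self-organize instead of being tuned by hand.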


2015 ◽  
Vol 125 ◽  
pp. 211-223 ◽  
Author(s):  
Céline Charroud ◽  
Jason Steffener ◽  
Emmanuelle Le Bars ◽  
Jérémy Deverdun ◽  
Alain Bonafe ◽  
...  
