Brain without mind: Computer simulation of neural networks with modifiable neuronal interactions

1985 ◽  
Vol 123 (4) ◽  
pp. 215-273 ◽  
Author(s):  
John W. Clark ◽  
Johann Rafelski ◽  
Jeffrey V. Winston

1996 ◽  
Vol 19 (2) ◽  
pp. 285-295 ◽  
Author(s):  
J. J. Wright ◽  
D. T. J. Liley

Abstract There is some complementarity of models for the origin of the electroencephalogram (EEG) and neural network models for information storage in brainlike systems. From the EEG models of Freeman, of Nunez, and of the authors' group we argue that the wavelike processes revealed in the EEG exhibit linear and near-equilibrium dynamics at macroscopic scale, despite extremely nonlinear – probably chaotic – dynamics at microscopic scale. Simulations of cortical neuronal interactions at global and microscopic scales are then presented. The simulations depend on anatomical and physiological estimates of synaptic densities, coupling symmetries, synaptic gain, dendritic time constants, and axonal delays. It is shown that the frequency content, wave velocities, frequency/wavenumber spectra and response to cortical activation of the electrocorticogram (ECoG) can be reproduced by a “lumped” simulation treating small cortical areas as single-function units. The corresponding cellular neural network simulation has properties that include those of attractor neural networks proposed by Amit and by Parisi. Within the simulations at both scales, sharp transitions occur between low and high cell firing rates. These transitions may form a basis for neural interactions across scale. To maintain overall cortical dynamics in the normal low firing-rate range, interactions between the cortex and the subcortical systems are required to prevent runaway global excitation. Thus, the interaction of cortex and subcortex via corticostriatal and related pathways may partly regulate global dynamics by a principle analogous to adiabatic control of artificial neural networks.
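The sharp transition between low and high firing rates described in the abstract can be sketched with a minimal single-unit firing-rate model with sigmoidal self-excitation; the self-excitation weight, gain, and threshold below are illustrative choices, not values from the simulations:

```python
import math

def steady_rate(external_input, w_self=8.0, gain=1.0, theta=4.0, iters=200):
    """Iterate the single-unit firing-rate map r <- sigmoid(w_self*r + I - theta)
    to a fixed point, starting from the quiescent state r = 0."""
    r = 0.0
    for _ in range(iters):
        r = 1.0 / (1.0 + math.exp(-gain * (w_self * r + external_input - theta)))
    return r

low = steady_rate(-1.0)   # weak drive: unit settles near the low-rate state
high = steady_rate(3.0)   # strong drive: unit jumps to the high-rate state
```

Weak external drive leaves the unit on the low-rate branch, while stronger drive pushes it across threshold onto the high-rate branch, a toy analogue of the sharp transitions that the abstract proposes as a basis for interactions across scale.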


Author(s):  
Michael P. Allen ◽  
Dominic J. Tildesley

This chapter concentrates on practical tips and tricks for improving the efficiency of computer simulation programs. This includes the effect of using truncated and shifted potentials, and the use of table look-up and neural networks for calculating potentials. Approaches for speeding up simulations, such as the Verlet neighbour list, linked-lists and multiple timestep methods are described. The chapter then proceeds to discuss the general structure of common simulation programs; in particular the choice of the starting configuration and the initial velocities of the particles. The chapter also contains details of the overall approach to organising runs, storing the data, and checking that the program is working correctly.
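The truncated and shifted potentials mentioned above can be sketched for the Lennard-Jones case; the cutoff of 2.5 sigma used here is a common convention in the simulation literature, not necessarily the chapter's choice:

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Full Lennard-Jones pair potential in reduced units."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_truncated_shifted(r, rc=2.5, epsilon=1.0, sigma=1.0):
    """Truncated-and-shifted LJ: zero beyond the cutoff rc, and shifted by
    -lj(rc) so the energy is continuous at r = rc (the force still jumps)."""
    if r >= rc:
        return 0.0
    return lj(r, epsilon, sigma) - lj(rc, epsilon, sigma)
```

Shifting removes the discontinuous energy jump at the cutoff that a bare truncation would introduce, at the cost of slightly raising the well depth; the neighbour-list and linked-list techniques the chapter describes then restrict the pair search to particles within `rc`.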


2009 ◽  
Vol 19 (1) ◽  
pp. 28-36
Author(s):  
Meredith E. Estep ◽  
Steven M. Barlow

Abstract Acknowledging the dynamical properties of neural networks allows insight into the functional segregation and integration of cerebral areas. From a theoretical viewpoint, the complexity of neuronal interactions within a distributed system may reflect its capacity to rapidly process multimodal information and modulate context-sensitive neural activity to encode perception and adaptive behavior. This article highlights recent studies aimed at understanding the functionally flexible connectivity of the orofacial substrate.


2012 ◽  
Vol 433-440 ◽  
pp. 6546-6550
Author(s):  
Jun Xu

Using adaptive noise-cancelling technology, this paper proposes a new approach to detecting harmonics and reactive currents, based on a neural network with adjustable learning parameters. The structure of this neural network and the adaptive weight-adjustment algorithm are presented. The trade-off between detection speed and precision is resolved favourably. The proposed approach can be used to detect harmonics and reactive currents in active power filter applications. Theoretical analysis and computer simulation confirm the validity of the approach.
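The adaptive noise-cancelling idea can be sketched as an ADALINE-style LMS filter with sine and cosine reference inputs at the fundamental frequency; this is an illustrative reconstruction with a fixed learning rate, not the paper's variable-learning-parameter algorithm:

```python
import math

def lms_fundamental(signal, f0, fs, mu=0.01, passes=20):
    """ADALINE with sin/cos references at the fundamental f0 (Hz), sampled
    at fs (Hz). After convergence the weighted references track the
    fundamental component of the signal; the residual e is the harmonic
    (and noise) content that an active power filter would compensate."""
    w_s = w_c = 0.0
    for _ in range(passes):
        for k in range(len(signal)):
            s = math.sin(2 * math.pi * f0 * k / fs)
            c = math.cos(2 * math.pi * f0 * k / fs)
            y = w_s * s + w_c * c      # estimated fundamental component
            e = signal[k] - y          # residual = harmonics + noise
            w_s += mu * e * s          # LMS weight updates
            w_c += mu * e * c
    return w_s, w_c
```

For a test current containing a 50 Hz fundamental plus a third harmonic, the weights converge to the fundamental's in-phase and quadrature amplitudes; subtracting the reconstructed fundamental from the input then isolates the harmonic current.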


2014 ◽  
Vol 26 (10) ◽  
pp. 2247-2293 ◽  
Author(s):  
Yimin Nie ◽  
Jean-Marc Fellous ◽  
Masami Tatsuno

The investigation of neural interactions is crucial for understanding information processing in the brain. Recently an analysis method based on information geometry (IG) has gained increased attention, and the properties of the pairwise IG measure have been studied extensively in relation to the two-neuron interaction. However, little is known about the properties of IG measures involving interactions among more neurons. In this study, we systematically investigated the influence of external inputs and the asymmetry of connections on the IG measures in cases ranging from 1-neuron to 10-neuron interactions. First, the analytical relationship between the IG measures and external inputs was derived for a network of 10 neurons with uniform connections. Our results confirmed that the single and pairwise IG measures were good estimators of the mean background input and of the sum of the connection weights, respectively. For the IG measures involving 3 to 10 neuronal interactions, we found that the influence of external inputs was highly nonlinear. Second, by computer simulation, we extended our analytical results to asymmetric connections. For a network of 10 neurons, the simulation showed that the behavior of the IG measures in relation to external inputs was similar to the analytical solution obtained for a uniformly connected network. When the network size was increased to 1000 neurons, the influence of external inputs almost disappeared. This result suggests that all IG measures from 1-neuron to 10-neuron interactions are robust against the influence of external inputs. In addition, we investigated how the strength of asymmetry influenced the IG measures. Computer simulation of a 1000-neuron network showed that all the IG measures were robust against the modulation of the asymmetry of connections. Our results provide further support for an information-geometric approach and will provide useful insights when these IG measures are applied to real experimental spike data.
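For the two-neuron case, the pairwise IG measure reduces to the log odds ratio of the joint firing pattern, which a short sketch can estimate from binary spike trains (plug-in probability estimates; no smoothing for zero counts is applied):

```python
import math

def pairwise_ig(spikes_a, spikes_b):
    """Pairwise information-geometric measure theta_12 for two binary spike
    trains: theta_12 = log(p11 * p00 / (p10 * p01)) in the log-linear model.
    Zero means no pairwise interaction; positive means correlated firing."""
    n = len(spikes_a)
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for a, b in zip(spikes_a, spikes_b):
        counts[(a, b)] += 1
    p = {pattern: c / n for pattern, c in counts.items()}
    return math.log(p[(1, 1)] * p[(0, 0)] / (p[(1, 0)] * p[(0, 1)]))
```

Independent spike trains give theta near zero, while neurons that tend to fire together give a positive theta; the study's higher-order measures extend this log-linear expansion to triplets and beyond.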


2020 ◽  
pp. 313-321
Author(s):  
L. Katerynych ◽  
M. Veres ◽  
E. Safarov ◽  
...  

This study evaluates the training process of a parallel system in the form of an artificial neural network built using a genetic algorithm. The goal is achieved by computer simulation of a neural network on multi-core CPUs, with a genetic algorithm searching for the weights of the artificial neural network. The performance of sequential and parallel training of the artificial neural network is compared.
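A minimal sketch of a genetic-algorithm weight search of the kind described above, here evolving the three weights of a single threshold neuron on the AND task (the population size, mutation scale, and task are illustrative assumptions, not the study's configuration):

```python
import random

AND_DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def fitness(weights, data):
    """Negative number of misclassifications of a single threshold neuron."""
    w0, w1, b = weights
    return -sum((1 if w0 * x0 + w1 * x1 + b > 0 else 0) != t
                for (x0, x1), t in data)

def evolve(data, pop_size=40, generations=100, seed=0):
    """Genetic search over the neuron's weights: rank selection, blend
    crossover, Gaussian mutation; the top half survives unchanged."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, data), reverse=True)
        survivors = pop[:pop_size // 2]
        children = [[(x + y) / 2 + rng.gauss(0, 0.2)
                     for x, y in zip(*rng.sample(survivors, 2))]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=lambda w: fitness(w, data))
```

Because the fitness evaluations of population members are mutually independent, they parallelize naturally across CPU cores, which is the property the study's sequential-versus-parallel comparison exploits.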


2019 ◽  
Vol 49 (4) ◽  
pp. 157-186
Author(s):  
Dariusz Ampuła

Abstract The article presents information about the usage of artificial neural networks. The automation of the neural-network analysis of the evaluation test data is highlighted. The kinds of MG-type artillery fuses are described, and the calibres of the cartridges in which they are used are specified. The preparation of the databases of test results for computer simulation is described. The construction of the neural networks, their main technical parameters, and the sizes of the learning, test, and validation sets are characterized. A summary of the chosen active neural networks for the individual kinds of analysed MG-type artillery fuses is presented. Learning graphs, values of sensitivity indicators, and fragments of prediction sheets for the chosen neural networks are shown.
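The division of test results into learning, test, and validation sets can be sketched as a simple shuffled split; the fractions below are illustrative assumptions, since the article's actual set sizes are not reproduced here:

```python
import random

def split_dataset(records, frac_train=0.7, frac_test=0.15, seed=42):
    """Shuffle the records and partition them into learning, test, and
    validation sets; the remainder after the first two splits becomes
    the validation set."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * frac_train)
    n_test = int(len(shuffled) * frac_test)
    train = shuffled[:n_train]
    test = shuffled[n_train:n_train + n_test]
    valid = shuffled[n_train + n_test:]
    return train, test, valid
```

Keeping the validation set out of both weight fitting and model selection is what makes the prediction-sheet results the article reports an unbiased estimate of performance on unseen fuses.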

