The structure dilemma in biological and artificial neural networks

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Thomas Pircher ◽  
Bianca Pircher ◽  
Eberhard Schlücker ◽  
Andreas Feigenspan

Abstract
Brain research to date has revealed that structure and function are highly related. Thus, for example, studies have repeatedly shown that the brains of patients suffering from schizophrenia or other diseases have a different connectome compared to healthy people. Apart from stochastic processes, however, an inherent logic describing how neurons connect to each other has not yet been identified. We revisited this structural dilemma by comparing and analyzing artificial and biologically based neural networks. Namely, we used feed-forward and recurrent artificial neural networks as well as networks based on the structure of the micro-connectome of C. elegans and of the human macro-connectome. We trained these diverse networks, which differ markedly in their architecture, initialization and pruning technique, and we found remarkable parallels between biologically based and artificial neural networks; in particular, we were able to show that the dilemma is also present in artificial neural networks. Our findings show that structure contains all the information, but that this structure is not exclusive. Indeed, the same structure was able to solve completely different problems with only minimal adjustments. We paid particular attention to the influence of weights and the neuron offset value, as they show different adaptation behaviour. Our findings open up new questions in the fields of artificial and biological information processing research.
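The idea of training networks constrained by a fixed connectivity structure can be illustrated with a minimal sketch. The mask below is a hypothetical stand-in for a connectome-derived adjacency matrix (not the authors' actual data); masking both the weights and the gradient updates keeps absent connections at zero, so only the weights and offsets (biases) on existing edges adapt.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical binary connectivity mask standing in for a fixed
# "structure" (e.g. a connectome-derived adjacency matrix).
mask = np.array([[1, 0, 1],
                 [0, 1, 1]], dtype=float)   # 2 inputs -> 3 neurons

# Weights are only meaningful on existing connections.
W = rng.normal(size=mask.shape) * mask
b = np.zeros(3)                              # per-neuron offset value

def forward(x, W, b):
    """Forward pass through one structurally constrained layer."""
    return np.tanh(x @ W + b)

x = np.array([0.5, -1.0])
y = forward(x, W, b)

# An illustrative gradient step that respects the structure:
# masking the update leaves absent connections at exactly zero.
grad = rng.normal(size=W.shape)
W = W - 0.1 * (grad * mask)

assert np.all(W[mask == 0] == 0)  # structure is preserved
```

Under this scheme the same mask can be reused for different tasks, with only the surviving weights and offsets adjusted, which is the sense in which one structure can solve different problems.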

2008 ◽  
Vol 18 (05) ◽  
pp. 389-403 ◽  
Author(s):  
THOMAS D. JORGENSEN ◽  
BARRY P. HAYNES ◽  
CHARLOTTE C. F. NORLUND

This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the network. This measure is used to determine which connections should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduction of the network's dimensionality.
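The benchmark technique named above, magnitude-based pruning, is standard and can be sketched briefly: remove the fraction of weights with the smallest absolute values, on the assumption that small weights contribute least to the network's output. This is a generic illustration, not the complexity-based measure the paper proposes.

```python
import numpy as np

def magnitude_prune(W, fraction):
    """Magnitude-based pruning: zero out the given fraction of the
    weights with the smallest absolute values. Ties at the threshold
    may prune slightly more than the requested fraction."""
    k = int(W.size * fraction)
    if k == 0:
        return W.copy()
    threshold = np.sort(np.abs(W), axis=None)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0
    return pruned

W = np.array([[0.10, -2.0],
              [0.03,  1.5]])
P = magnitude_prune(W, 0.5)  # removes the two smallest-magnitude weights
```

The complexity-based method of the paper differs in that the pruning criterion is information-theoretic rather than the raw weight magnitude shown here.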


2019 ◽  
Author(s):  
Fabiane Barbosa do Nascimento ◽  
Leonardo Rocha Olivi ◽  
Luís Henrique Lopes Lima ◽  
Leonardo Willer de Oliveira ◽  
Ivo Chaves Silva Junior

2015 ◽  
Vol 756 ◽  
pp. 507-512
Author(s):  
S.N. Danilin ◽  
M.V. Makarov ◽  
S.A. Shchanikov

The article deals with the problem of calculating the fault tolerance of neural network components of industrial control and measurement systems used in mechanical engineering. We have formulated a general approach to developing methods for the quantitative determination of the fault tolerance of artificial neural networks of any structure and function. We have studied the fault tolerance of four artificial feedforward neural networks, as well as the correlation between the determined fault tolerance level and a selected performance parameter of the networks.
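One common way to quantify fault tolerance, sketched below purely as an illustration (the article's own metric is not reproduced here), is to simulate a stuck-at-zero fault in each weight in turn and measure how far the network's output deviates from the fault-free baseline.

```python
import numpy as np

def forward(x, W):
    # Simple feedforward layer with tanh activation.
    return np.tanh(x @ W)

def fault_tolerance(x, W):
    """Illustrative fault-tolerance probe: zero each weight in turn
    to simulate a stuck-at-zero fault and record the resulting
    output deviation from the fault-free baseline."""
    baseline = forward(x, W)
    deviations = []
    for idx in np.ndindex(W.shape):
        faulty = W.copy()
        faulty[idx] = 0.0
        deviations.append(np.linalg.norm(forward(x, faulty) - baseline))
    return float(np.mean(deviations)), float(np.max(deviations))

x = np.array([1.0, -0.5])
W = np.array([[0.8, -0.3],
              [0.2,  0.6]])
mean_dev, max_dev = fault_tolerance(x, W)
```

A network whose mean and maximum deviations stay small under such single-fault injection is, in this simple sense, more fault tolerant; correlating such a measure with a performance parameter is the kind of study the abstract describes.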

