Dendritic normalisation improves learning in sparsely connected artificial neural networks

2021, Vol 17 (8), pp. e1009202
Author(s):  
Alex D. Bird ◽  
Peter Jedlicka ◽  
Hermann Cuntz

Artificial neural networks, taking inspiration from biological neurons, have become an invaluable tool for machine learning applications. Recent studies have developed techniques to effectively tune the connectivity of sparsely connected artificial neural networks, which have the potential to be more computationally efficient than their fully connected counterparts and more closely resemble the architectures of biological systems. Here we present a normalisation, based on the biophysical behaviour of neuronal dendrites receiving distributed synaptic inputs, that divides the weight of an artificial neuron’s afferent contacts by their number. We apply this dendritic normalisation to various sparsely connected feedforward network architectures, as well as simple recurrent and self-organised networks with spatially extended units. The learning performance is significantly increased, providing an improvement over other widely used normalisations in sparse networks. The results are two-fold, being both a practical advance in machine learning and an insight into how the structure of neuronal dendritic arbours may contribute to computation.
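The normalisation described in the abstract can be sketched in a few lines: for each neuron, divide its afferent weights by the number of incoming contacts (its in-degree). The sizes, mask, and variable names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse layer: 'mask' fixes which afferent contacts exist.
n_in, n_out = 8, 4
mask = rng.random((n_in, n_out)) < 0.5           # sparse connectivity pattern
weights = rng.normal(size=(n_in, n_out)) * mask  # weights only on existing contacts

# Dendritic normalisation: divide each neuron's afferent weights
# by the number of its incoming contacts (its in-degree).
in_degree = mask.sum(axis=0)                     # contacts per output neuron
normalised = weights / np.maximum(in_degree, 1)  # guard against isolated neurons

x = rng.random(n_in)
activation = x @ normalised                      # forward pass for one input
```

Because the effective weight scale shrinks as in-degree grows, each neuron responds to the proportion, rather than the absolute number, of its active inputs, which is the biophysical observation motivating the paper.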

2020
Author(s):  
Alex D Bird ◽  
Hermann Cuntz

Inspired by the physiology of neuronal systems in the brain, artificial neural networks have become an invaluable tool for machine learning applications. However, their biological realism and theoretical tractability are limited, resulting in poorly understood parameters. We have recently shown that biological neuronal firing rates in response to distributed inputs are largely independent of size, meaning that neurons are typically responsive to the proportion, not the absolute number, of their inputs that are active. Here we introduce such a normalisation, where the strength of a neuron’s afferents is divided by their number, to various sparsely-connected artificial networks. The learning performance is dramatically increased, providing an improvement over other widely-used normalisations in sparse networks. The resulting machine learning tools are universally applicable and biologically inspired, rendering them better understood and more stable in our tests.


Sensors, 2021, Vol 21 (5), pp. 1654
Author(s):  
Poojitha Vurtur Badarinath ◽  
Maria Chierichetti ◽  
Fatemeh Davoudi Kakhki

Current maintenance intervals of mechanical systems are scheduled a priori based on the life of the system, resulting in expensive maintenance scheduling, and often undermining the safety of passengers. Going forward, the actual usage of a vehicle will be used to predict stresses in its structure, and therefore, to define a specific maintenance scheduling. Machine learning (ML) algorithms can be used to map a reduced set of data coming from real-time measurements of a structure into a detailed/high-fidelity finite element analysis (FEA) model of the same system. As a result, the FEA-based ML approach will directly estimate the stress distribution over the entire system during operations, thus improving the ability to define ad-hoc, safe, and efficient maintenance procedures. The paper initially presents a review of the current state-of-the-art of ML methods applied to finite elements. A surrogate finite element approach based on ML algorithms is also proposed to estimate the time-varying response of a one-dimensional beam. Several ML regression models, such as decision trees and artificial neural networks, have been developed, and their performance is compared for direct estimation of the stress distribution over a beam structure. The surrogate finite element models based on ML algorithms are able to estimate the response of the beam accurately, with artificial neural networks providing more accurate results.
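The surrogate idea above — learning a map from a reduced set of real-time measurements to a full FEA stress field — can be sketched with synthetic data. The paper compares decision trees and artificial neural networks; as a simplified stand-in, this sketch fits a linear least-squares map, and every size, noise level, and name here is a hypothetical assumption.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup (all sizes hypothetical): the stress field at 50 FEA nodes is
# assumed to depend linearly on 3 real-time sensor readings on a beam.
n_samples, n_sensors, n_nodes = 200, 3, 50
true_map = rng.normal(size=(n_sensors, n_nodes))  # stands in for the FEA relation

# Simulated training data: measurements paired with noisy stress fields.
sensors = rng.random((n_samples, n_sensors))
stress = sensors @ true_map + 0.01 * rng.normal(size=(n_samples, n_nodes))

# Fit the surrogate by least squares: sensor readings -> full stress field.
coeffs, *_ = np.linalg.lstsq(sensors, stress, rcond=None)

# Estimate the stress distribution for a new measurement in real time.
new_reading = rng.random(n_sensors)
predicted_stress = new_reading @ coeffs
```

Once fitted, the surrogate evaluates in microseconds, which is what makes ad-hoc, usage-based maintenance scheduling feasible where a full FEA solve would not be.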


Author(s):  
Odysseas Kontovourkis ◽  
Marios C. Phocas ◽  
Ifigenia Lamprou

Nowadays, on the basis of significant work carried out, architectural adaptive structures are considered to be intelligent entities, able to react to various internal or external influences. Their adaptive behavior can be examined in a digital or physical environment, generating a variety of alternative solutions or structural transformations. These are controlled through different computational approaches, ranging from interactive exploration methods, which produce alternative emergent results, to automated optimization methods, which converge on acceptable fitting solutions. This paper examines the adaptive behavior of a kinetic structure, aiming to explore suitable solutions that result in appropriate final shapes during the transformation process. A machine learning methodology that implements an artificial neural network algorithm is integrated into the suggested structure. The latter is formed by units articulated together in a sequential composition consisting of primary soft mechanisms and secondary rigid components that are responsible for its reconfiguration and stiffness. A number of case studies that respond to unstructured environments are set as examples to test the effectiveness of the proposed methodology in handling a large number of input data and in optimizing the complex and nonlinear transformation behavior of the kinetic system at the global level, as a result of the units’ local activation that influences nearby units in a chaotic and unpredictable manner.

