A model biological neural network: the cephalopod vestibular system

2007 ◽  
Vol 362 (1479) ◽  
pp. 473-481 ◽  
Author(s):  
Roddy Williamson ◽  
Abdul Chrachri

Artificial neural networks (ANNs) have become increasingly sophisticated and are widely used for the extraction of patterns or meaning from complicated or imprecise datasets. At the same time, our knowledge of the biological systems that inspired these ANNs has also progressed, and a range of model systems are emerging where there is detailed information not only on the architecture and components of the system but also on their ontogeny, plasticity and the adaptive characteristics of their interconnections. We describe here a biological neural network contained in the cephalopod statocysts; the statocysts are analogous to the vertebrate vestibular system and provide the animal with sensory information on its orientation and movements in space. The statocyst network comprises only a small number of cells, made up of just three classes of neurons, but, in combination with the large efferent innervation from the brain, forms an ‘active’ sense organ that uses feedback and feed-forward mechanisms to alter and dynamically modulate the activity within cells and the way the various components are interconnected. The neurons are fully accessible to physiological investigation and the system provides an excellent model for describing the mechanisms underlying the operation of a sophisticated neural network.

2020 ◽  
Vol 2 (3, September-December) ◽  
pp. e642020
Author(s):  
Ricardo Santos De Oliveira

The human brain contains around 86 billion nerve cells and about as many glial cells [1]. In addition, there are about 100 trillion connections between the nerve cells alone. While mapping all the connections of a human brain remains out of reach, scientists have started to address the problem on a smaller scale. The term artificial neural networks (ANNs), or simply neural networks (NNs), encompasses a family of nonlinear computational methods that, at least in the early stage of their development, were inspired by the functioning of the human brain. Indeed, the first ANNs were nothing more than integrated circuits devised to reproduce and understand the transmission of nerve stimuli and signals in the human central nervous system [2]. The natural way to proceed is therefore to first study human behavior. The human brain has a biological neural network with billions of interconnections. As the brain learns, these connections are formed, changed or removed, much as an artificial neural network adjusts its weights to account for a new training example. This complexity is why it is said that practice makes perfect: a greater number of learning instances allows the biological neural network to become better at whatever it is doing. Depending on the stimulus, only a certain subset of neurons is activated in the nervous system. Recently, Moreau et al. [3] published an interesting paper studying how artificial intelligence can help doctors and patients with meningiomas make better treatment decisions. They demonstrated that their models were capable of predicting meaningful individual-specific clinical outcome variables and showed good generalizability across the Surveillance, Epidemiology, and End Results (SEER) database in predicting meningioma malignancy and survival after specific treatments.
Statistical learning models were trained and validated on 62,844 patients from the SEER database, and the malignancy model was scored using a series of metrics. A free smartphone and web application were also provided for readers to access and test the predictive models (www.meningioma.app). The use of artificial intelligence techniques is gradually bringing efficient theoretical solutions to a large number of real-world clinical problems related to the brain [4]. Specifically, thanks to the recent accumulation of relevant data and the development of increasingly effective algorithms, it has been possible to significantly increase the understanding of complex brain mechanisms. Researchers' efforts are producing increasingly sophisticated and interpretable algorithms, which could favor a more intensive use of “intelligent” technologies in practical clinical contexts. Brain and machine working together will improve the power of these methods; making individual-patient predictions could lead to improved diagnosis, patient counseling, and outcomes.
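The weight-adjustment analogy in the passage above (connections strengthened or weakened with each new training example) can be illustrated with the classic perceptron learning rule; a minimal sketch, unrelated to the cited meningioma models:

```python
# Minimal sketch of how an artificial neuron adjusts its weights for each
# new training example (plain perceptron rule; all numbers illustrative).

def perceptron_step(weights, bias, x, target, lr=0.1):
    """One learning step: strengthen or weaken connections depending on the
    prediction error, loosely analogous to synaptic change in the brain."""
    activation = sum(w * xi for w, xi in zip(weights, x)) + bias
    prediction = 1 if activation > 0 else 0
    error = target - prediction          # 0 if correct, +/-1 otherwise
    new_weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    new_bias = bias + lr * error
    return new_weights, new_bias, prediction

# Learn logical OR from repeated examples ("practice makes perfect").
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = [0.0, 0.0], 0.0
for _ in range(10):                      # several passes over the data
    for x, t in data:
        w, b, _ = perceptron_step(w, b, x, t)

print([perceptron_step(w, b, x, t)[2] for x, t in data])  # [0, 1, 1, 1]
```

More training passes drive the error to zero, which is the "practice" effect the passage describes.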


2010 ◽  
Vol 61 (2) ◽  
pp. 120-124 ◽  
Author(s):  
Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANNs) in general classify patterns according to their relationships, responding to related patterns with a similar output. Polynomial neural networks (PNNs) are capable of organizing themselves in response to certain features (relations) of the data. The polynomial neural network for dependence-of-variables identification (D-PNN) describes a functional dependence of input variables (not entire patterns). It approximates the hyper-surface of this function with multi-parametric particular polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network is based on the GMDH polynomial neural network and was designed by the author. The D-PNN operates in a way closer to brain learning than the ANN does. The ANN is in principle a simplified form of the PNN, in which the combinations of input variables are missing.
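A single GMDH-style polynomial unit, the building block such networks are grown from, can be sketched as follows. This uses the standard Ivakhnenko quadratic form and a least-squares fit; it is not the author's D-PNN itself:

```python
# Hedged sketch: one GMDH-style polynomial unit (quadratic in two inputs),
# fit by least squares to recover a hidden functional dependence.
import numpy as np

def poly_features(x1, x2):
    # y = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2
    return np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])

def fit_unit(x1, x2, y):
    """Fit the six polynomial coefficients by least squares."""
    A = poly_features(x1, x2)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 2.0 + 0.5 * x1 - x2 + 3.0 * x1 * x2   # hidden dependence of the variables
c = fit_unit(x1, x2, y)
print(np.round(c, 3))                     # fitted coefficients
```

A full PNN would stack many such units, keeping the best-performing combinations of input variables at each layer.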


2013 ◽  
Vol 7 (1) ◽  
pp. 49-62 ◽  
Author(s):  
Vijaykumar Sutariya ◽  
Anastasia Groshev ◽  
Prabodh Sadana ◽  
Deepak Bhatia ◽  
Yashwant Pathak

Artificial neural network (ANN) technology models the pattern-recognition capabilities of the neural networks of the brain. Like a single neuron in the brain, an artificial neuron unit receives inputs from many external sources, processes them, and makes decisions. Interestingly, an ANN simulates the biological nervous system and draws on analogues of adaptive biological neurons. ANNs do not require rigidly structured experimental designs and can map functions using historical or incomplete data, which makes them a powerful tool for the simulation of various non-linear systems. ANNs have many applications in various fields, including engineering, psychology, medicinal chemistry and pharmaceutical research. Because of their capacity for making predictions, pattern recognition, and modeling, ANNs have been very useful in many aspects of pharmaceutical research, including modeling of the brain's neural network, analytical data analysis, drug modeling, protein structure and function, dosage optimization and manufacturing, pharmacokinetic and pharmacodynamic modeling, and in vitro–in vivo correlations. This review discusses the applications of ANNs in drug delivery and pharmacological research.


Author(s):  
Thomas P. Trappenberg

This chapter discusses the basic operation of an artificial neural network which is the major paradigm of deep learning. The name derives from an analogy to a biological brain. The discussion begins by outlining the basic operations of neurons in the brain and how these operations are abstracted by simple neuron models. It then builds networks of artificial neurons that constitute much of the recent success of AI. The focus of this chapter is on using such techniques, with subsequent consideration of their theoretical embedding.
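The abstraction the chapter describes, a neuron as a weighted sum passed through a nonlinearity, and a network as layers of such neurons, fits in a few lines. A minimal sketch (all weights here are illustrative, not from the chapter):

```python
# Simple neuron model: dendritic summation followed by a sigmoidal
# "firing rate" nonlinearity; a network is layers of such units.
import math

def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))      # sigmoid activation

def layer(inputs, weight_matrix, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_matrix, biases)]

# Two-layer network: 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer([1.0, 0.5], [[0.4, -0.6], [0.3, 0.8]], [0.0, -0.1])
output = layer(hidden, [[1.2, -0.7]], [0.2])
print(output)
```

Deep learning keeps exactly this structure but stacks many layers and learns the weights from data.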


2012 ◽  
Vol 263-266 ◽  
pp. 3374-3377
Author(s):  
Hua Liang Wu ◽  
Zhen Dong Mu ◽  
Jian Feng Hu

In classification applications, neural networks are often used as the classification tool. In this paper, a neural network is applied to motor imagery EEG analysis: the EEG first undergoes Hjorth conversion, the signal is then transformed into the frequency domain, and finally Fisher distance is used for feature extraction. The recognition rate was 97.86% on the study (training) sample and 80% on the test sample.
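The Hjorth descriptors referred to in the pipeline have standard definitions (activity, mobility, complexity). A sketch on a synthetic signal, since the paper's EEG data are not available here:

```python
# Standard Hjorth parameters of a 1-D signal; the "EEG" below is a
# synthetic 10 Hz rhythm standing in for real motor imagery data.
import numpy as np

def hjorth(signal):
    d1 = np.diff(signal)                 # first derivative
    d2 = np.diff(d1)                     # second derivative
    var0, var1, var2 = signal.var(), d1.var(), d2.var()
    activity = var0                      # signal power
    mobility = np.sqrt(var1 / var0)      # mean frequency proxy
    complexity = np.sqrt(var2 / var1) / mobility   # shape vs. pure sine
    return activity, mobility, complexity

t = np.linspace(0, 1, 500)
eeg_like = np.sin(2 * np.pi * 10 * t)
a, m, c = hjorth(eeg_like)
print(round(a, 3), round(m, 3), round(c, 4))
```

For a pure sine the complexity is close to 1; real EEG yields higher values, which is what makes these descriptors useful features for a classifier.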


2021 ◽  
Vol 12 (3) ◽  
pp. 255-269
Author(s):  
Kateryna Kruty ◽  
Tetiana Bohdan ◽  
Marharyta Kozyr ◽  
Oleksandra Sviontyk ◽  
Tetiana Shvaliuk ◽  
...  

Language, as a means of implementing the speech process, is an independent system with its own structure. In the context of our research, the concept of M.I. Zhynkin (1958) on the grid distribution of information in the grammatical space is important, as it explains the mechanism of perception and awareness of speech. It is important for us to conclude that the sooner a direct connection is formed between the conceptual system and the basal ganglia, the better the child's awareness, assimilation and use of grammatical categories. Organizing the normal functioning of speech requires the complex, coordinated work of millions of neural elements of the brain, located in its various parts. It has been shown that after the age of ten, the ability to develop the neural networks necessary for the construction of speech centers disappears. The problem of forming grammatically correct speech in preschool children can be solved quickly and efficiently if the interaction of different analyzers is intensified. It is shown that the sensory information complex consists of auditory, visual and tactile images which, complementing and amplifying each other, increase the number of useful signals and expand the speech space, which in turn limits the choice of an adequate speech pattern during acquisition, perception and oral awareness. Children's learning of the elements of the grammatical system of language is influenced by two main factors, namely the simplicity or complexity of the language phenomenon and the degree of its communicative significance. The formation of grammatically correct speech (morphology, word formation, syntax) is based on a certain cognitive development of the child.


2021 ◽  
Vol 3 ◽  
Author(s):  
A.V. Medievsky ◽  
A.G. Zotin ◽  
K.V. Simonov ◽  
A.S. Kruglyakov

The study of the principles of formation and development of the structure of the brain is necessary to expand fundamental knowledge both in neurophysiology and in medicine. A detailed description of all the features of the brain will make it possible to choose the most effective therapy, or to check the effectiveness of the drugs being developed. The basis for creating a model of a biological neural network is a map of nerve cells and their connections. To obtain it, it is necessary to carry out microscopy of the cell culture, which produces low-contrast images. The study of these images is a difficult task; therefore, a computational image-processing method based on the Shearlet transform, with contrast enhancement via color coding, has been developed to improve the process of creating a neural network model. To assess the functional characteristics of each cell, a modified version of the MEA method is proposed. The new version will have movable microelectrodes capable of homing in on the desired coordinates in accordance with the data from the analyzed microscopic images and interacting with a specific neuron. The contact of a microelectrode with a single cell allows one to study its individual connections with minimal noise from the excitation of neighboring cells.
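The Shearlet transform itself requires a dedicated library; this sketch illustrates only the simpler stage described, stretching a low-contrast micrograph and color-coding intensity. The data and the blue-to-red ramp are illustrative assumptions, not the authors' method:

```python
# Contrast stretching plus pseudo-color coding of a low-contrast image,
# so faint cell structures become visible (synthetic data).
import numpy as np

def contrast_stretch(img, low_pct=2, high_pct=98):
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)

def color_code(img01):
    """Map normalized intensity to a simple blue -> green -> red ramp."""
    r = img01
    b = 1.0 - img01
    g = 1.0 - np.abs(img01 - 0.5) * 2
    return np.stack([r, g, b], axis=-1)

rng = np.random.default_rng(0)
low_contrast = 0.45 + 0.05 * rng.random((64, 64))   # dim synthetic micrograph
rgb = color_code(contrast_stretch(low_contrast))
print(rgb.shape)
```

The stretch maps the narrow intensity band onto the full [0, 1] range before color coding, which is what makes the faint detail stand out.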


2020 ◽  
Vol 117 (47) ◽  
pp. 29872-29882
Author(s):  
Ben Tsuda ◽  
Kay M. Tye ◽  
Hava T. Siegelmann ◽  
Terrence J. Sejnowski

The prefrontal cortex encodes and stores numerous, often disparate, schemas and flexibly switches between them. Recent research on artificial neural networks trained by reinforcement learning has made it possible to model fundamental processes underlying schema encoding and storage. Yet how the brain is able to create new schemas while preserving and utilizing old schemas remains unclear. Here we propose a simple neural network framework that incorporates hierarchical gating to model the prefrontal cortex’s ability to flexibly encode and use multiple disparate schemas. We show how gating naturally leads to transfer learning and robust memory savings. We then show how neuropsychological impairments observed in patients with prefrontal damage are mimicked by lesions of our network. Our architecture, which we call DynaMoE, provides a fundamental framework for how the prefrontal cortex may handle the abundance of schemas necessary to navigate the real world.
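As a rough illustration of the gating idea, here is a generic mixture-of-experts forward pass, not the authors' DynaMoE architecture: a gate decides which expert's "schema" handles the current input.

```python
# Generic gated mixture of experts: a softmax gate blends the outputs
# of several linear "schema" experts (random, untrained weights).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(1)
n_in, n_experts = 4, 3
experts = [rng.standard_normal(n_in) for _ in range(n_experts)]  # one expert per schema
gate_w = rng.standard_normal((n_experts, n_in))

def forward(x):
    gates = softmax(gate_w @ x)                   # how much each schema applies
    outputs = np.array([w @ x for w in experts])  # each expert's answer
    return gates @ outputs, gates                 # gated mixture

x = rng.standard_normal(n_in)
y, g = forward(x)
print(g)   # gate weights, summing to 1
```

Training only the gate while freezing the experts is one way such architectures transfer old schemas to new tasks.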


1999 ◽  
Vol 10 (05) ◽  
pp. 815-821 ◽  
Author(s):  
DANIEL VOLK

A discrete model of a neural network of excitatory and inhibitory neurons is presented which yields oscillations of its global activity. Different types of dynamics occur depending on the selection of parameters: oscillating population activity as well as randomly fluctuating but mainly constant activity. For certain sets of parameters the model also shows temporary transitions from apparently random to periodic behavior in one run, similar to an epileptic seizure.
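The class of model described can be sketched with a discrete excitatory/inhibitory pair; this is an illustrative map in the same spirit, not the paper's actual model, with gains chosen (an assumption) so that the delayed inhibitory feedback makes global activity oscillate:

```python
# Two-population discrete map: excitatory activity E drives inhibitory
# activity I, which suppresses E one step later -> oscillations.
import math

def step(E, I, wEE=6.0, wEI=12.0, wIE=12.0, hI=-3.0):
    f = lambda x: 1.0 / (1.0 + math.exp(-x))   # population activation
    E_next = f(wEE * E - wEI * I)
    I_next = f(wIE * E + hI)
    return E_next, I_next

E, I = 0.1, 0.1
trace = []
for _ in range(200):
    E, I = step(E, I)
    trace.append(E)
# Global activity repeatedly rises and collapses instead of settling.
print(max(trace[-20:]), min(trace[-20:]))
```

Changing the coupling weights moves the map between this oscillatory regime and a constant-activity fixed point, mirroring the parameter dependence reported in the abstract.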


2020 ◽  
Vol 16 (11) ◽  
pp. e1008342
Author(s):  
Zhewei Zhang ◽  
Huzi Cheng ◽  
Tianming Yang

The brain makes flexible and adaptive responses in a complicated and ever-changing environment for an organism’s survival. To achieve this, the brain needs to understand the contingencies between its sensory inputs, actions, and rewards. This is analogous to the statistical inference that has been extensively studied in the natural language processing field, where recent developments of recurrent neural networks have found many successes. We wonder whether these neural networks, the gated recurrent unit (GRU) networks in particular, reflect how the brain solves the contingency problem. Therefore, we build a GRU network framework inspired by the statistical learning approach of NLP and test it with four exemplar behavior tasks previously used in empirical studies. The network models are trained to predict future events based on past events, both comprising sensory, action, and reward events. We show the networks can successfully reproduce animal and human behavior. The networks generalize the training, perform Bayesian inference in novel conditions, and adapt their choices when event contingencies vary. Importantly, units in the network encode task variables and exhibit activity patterns that match previous neurophysiology findings. Our results suggest that the neural network approach based on statistical sequence learning may reflect the brain’s computational principle underlying flexible and adaptive behaviors and serve as a useful approach to understand the brain.
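The GRU cell at the heart of such networks follows standard equations; a minimal sketch with random, untrained weights (the event encoding below is a stand-in, not the authors' task setup):

```python
# Standard GRU update: gates decide how much of the past state to keep
# and how much to rewrite with new input.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 5, 8                      # one-hot events in, hidden state size
Wz, Uz = rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid))
Wr, Ur = rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid))
Wh, Uh = rng.standard_normal((n_hid, n_in)), rng.standard_normal((n_hid, n_hid))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(h, x):
    z = sigmoid(Wz @ x + Uz @ h)        # update gate: how much to rewrite memory
    r = sigmoid(Wr @ x + Ur @ h)        # reset gate: how much past to consult
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde    # gated blend of old and new state

# Feed a short sequence of one-hot "events" (sensory/action/reward stand-ins).
h = np.zeros(n_hid)
for event in [0, 2, 4, 1]:
    x = np.eye(n_in)[event]
    h = gru_step(h, x)
print(h.shape)
```

In the study's setup, a readout on top of a hidden state like `h` is trained to predict the next event in the sequence; the gating is what lets the state carry task contingencies across time.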

