Formation of neural networks with structural and functional features consistent with small-world network topology on surface-grafted polymer particles

2019
Vol 6 (10)
pp. 191086
Author(s):
Vibeke Devold Valderhaug
Wilhelm Robert Glomm
Eugenia Mariana Sandru
Masahiro Yasuda
Axel Sandvig
...  

In vitro electrophysiological investigation of neural activity at a network level holds tremendous potential for elucidating underlying features of brain function (and dysfunction). In standard neural network modelling systems, however, the fundamental three-dimensional (3D) character of the brain is a largely disregarded feature. This widely applied neuroscientific strategy affects several aspects of the structure–function relationships of the resulting networks, altering network connectivity and topology, ultimately reducing the translatability of the results obtained. As these model systems increase in popularity, it becomes imperative that they capture, as accurately as possible, fundamental features of neural networks in the brain, such as small-worldness. In this report, we combine in vitro neural cell culture with a biologically compatible scaffolding substrate, surface-grafted polymer particles (PPs), to develop neural networks with 3D topology. Furthermore, we investigate their electrophysiological network activity through the use of 3D multielectrode arrays. The resulting neural network activity shows emergent behaviour consistent with maturing neural networks capable of performing computations, i.e. activity patterns suggestive of both information segregation (desynchronized single spikes and local bursts) and information integration (network spikes). Importantly, we demonstrate that the resulting PP-structured neural networks show both structural and functional features consistent with small-world network topology.
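Small-worldness is typically quantified by comparing a network's clustering and characteristic path length with those of density-matched random graphs. Below is a minimal sketch (not the authors' analysis code) of this calculation with NetworkX, using the Humphries–Gurney index σ; the Watts–Strogatz graph is a hypothetical stand-in for a functional connectivity graph inferred from the MEA recordings.

```python
# Minimal sketch of a small-worldness calculation; assumes the network is an
# undirected, connected NetworkX graph. The example graph is a hypothetical
# Watts-Strogatz surrogate, not data from the paper.
import networkx as nx

def small_world_sigma(G, n_random=20, seed=42):
    """Humphries & Gurney index: sigma = (C / C_rand) / (L / L_rand).
    sigma > 1 suggests small-world topology."""
    C = nx.average_clustering(G)
    L = nx.average_shortest_path_length(G)
    n, m = G.number_of_nodes(), G.number_of_edges()
    C_rand, L_rand, kept = 0.0, 0.0, 0
    for i in range(n_random):
        R = nx.gnm_random_graph(n, m, seed=seed + i)  # density-matched random graph
        if not nx.is_connected(R):
            continue  # path length is undefined on disconnected graphs
        C_rand += nx.average_clustering(R)
        L_rand += nx.average_shortest_path_length(R)
        kept += 1
    return (C / (C_rand / kept)) / (L / (L_rand / kept))

# Hypothetical stand-in for a graph inferred from MEA functional connectivity:
G = nx.connected_watts_strogatz_graph(n=60, k=6, p=0.1, seed=1)
print(f"sigma = {small_world_sigma(G):.2f}")  # > 1 indicates small-worldness
```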

2013
Vol 7 (1)
pp. 49-62
Author(s):
Vijaykumar Sutariya
Anastasia Groshev
Prabodh Sadana
Deepak Bhatia
Yashwant Pathak

Artificial neural network (ANN) technology models the pattern recognition capabilities of the neural networks of the brain. Like a single neuron in the brain, an artificial neuron unit receives inputs from many external sources, processes them, and makes a decision. The ANN thus simulates the biological nervous system, drawing on analogues of adaptive biological neurons. ANNs do not require rigidly structured experimental designs and can map functions using historical or incomplete data, which makes them a powerful tool for simulating various non-linear systems. ANNs have applications in many fields, including engineering, psychology, medicinal chemistry and pharmaceutical research. Because of their capacity for prediction, pattern recognition, and modeling, ANNs have been very useful in many aspects of pharmaceutical research, including modeling of the brain's neural network, analytical data analysis, drug modeling, protein structure and function, dosage optimization and manufacturing, pharmacokinetic and pharmacodynamic modeling, and in vitro-in vivo correlations. This review discusses the applications of ANNs in drug delivery and pharmacological research.
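For readers unfamiliar with the neuron model the review refers to, the following sketch shows the standard abstraction: a weighted sum of inputs plus a bias, passed through a non-linear activation. All values are illustrative and not drawn from the paper.

```python
# Minimal sketch of a single artificial neuron: weighted sum of inputs plus
# a bias, squashed by a sigmoid activation. Numbers are illustrative only.
import numpy as np

def neuron(inputs, weights, bias):
    """Weighted sum of inputs followed by a sigmoid activation."""
    z = np.dot(weights, inputs) + bias
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid maps the sum into (0, 1)

x = np.array([0.5, -1.2, 3.0])   # hypothetical external inputs
w = np.array([0.4, 0.1, -0.6])   # hypothetical connection strengths
print(neuron(x, w, bias=0.2))
```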


2006
Vol 100 (5-6)
pp. 290-296
Author(s):
Julien Bouvier
Sandra Autran
Gilles Fortin
Jean Champagnat
Muriel Thoby-Brisson

2010
Vol 61 (2)
pp. 120-124
Author(s):  
Ladislav Zjavka

Generalization of Patterns by Identification with Polynomial Neural Network

Artificial neural networks (ANNs) generally classify patterns according to their relationships, responding to related patterns with similar outputs. Polynomial neural networks (PNNs) are capable of organizing themselves in response to certain features (relations) of the data. The polynomial neural network for dependence-of-variables identification (D-PNN) describes a functional dependence of input variables (not of entire patterns). It approximates the hyper-surface of this function with multi-parametric particular polynomials, forming its functional output as a generalization of the input patterns. This new type of neural network is based on the GMDH polynomial neural network and was designed by the author. The D-PNN operates in a way closer to brain learning than the ANN does; the ANN is in principle a simplified form of the PNN in which the combinations of input variables are missing.
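The GMDH building block the abstract alludes to can be made concrete. The sketch below fits one "partial" quadratic polynomial of two input variables by least squares; it illustrates the polynomial unit only, not the full D-PNN architecture designed by the author.

```python
# Minimal sketch of a GMDH-style partial polynomial: a quadratic function of
# two inputs whose coefficients are fit by least squares. The data below are
# synthetic, generated from a known polynomial dependence.
import numpy as np

def fit_partial_polynomial(x1, x2, y):
    """Fit y ~ a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * x1 - 0.5 * x1 * x2 + 0.3 * x2**2   # known dependence
print(fit_partial_polynomial(x1, x2, y).round(2))   # ~ [1.0, 2.0, 0.0, -0.5, 0.0, 0.3]
```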


2021
Vol 1 (1)
Author(s):
Alexander P. Christensen

The nature of associations between variables is important for constructing theory about psychological phenomena. In the last decade, this topic has received renewed interest with the introduction of psychometric network models. In psychology, network models are often contrasted with latent variable (e.g., factor) models. Recent research has shown that differences between the two tend to be more substantive than statistical. One recent algorithm, the Loadings Comparison Test (LCT), was developed to predict whether data were generated from a factor or small-world network model. A significant limitation of the current LCT implementation is that it is based on heuristics derived from descriptive statistics. In the present study, we used artificial neural networks to replace these heuristics and to develop a more robust and generalizable algorithm. We performed a Monte Carlo simulation study that compared the neural networks with the original LCT algorithm as well as with logistic regression models trained on the same data. We found that the neural networks performed as well as or better than both methods at predicting whether data were generated from a factor, small-world network, or random network model. Although the neural networks were trained only on small-world networks, we show that they can reliably predict the data-generating model of random networks, demonstrating generalizability beyond the training data. We echo the call for more formal theories about the relations between variables and discuss the role of the LCT in this process.
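The comparison the study describes can be sketched as follows: the same feature matrix is fed to a small neural network and to logistic regression, and cross-validated accuracy is compared. The feature matrix below is a random placeholder (so both models will score near chance); in the study, the predictors are statistics derived from estimated loadings.

```python
# Minimal sketch of the model comparison: neural network vs. logistic
# regression on the same features, scored by cross-validation. Features and
# labels are placeholders, not the study's loading-based statistics.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))       # placeholder for loading-based features
y = rng.integers(0, 3, size=300)     # 0 = factor, 1 = small-world, 2 = random

nn = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
lr = LogisticRegression(max_iter=2000)
print("neural net:", cross_val_score(nn, X, y, cv=5).mean())
print("logistic:  ", cross_val_score(lr, X, y, cv=5).mean())
```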


Author(s):  
Thomas P. Trappenberg

This chapter discusses the basic operation of an artificial neural network, the major paradigm of deep learning. The name derives from an analogy to a biological brain. The discussion begins by outlining the basic operations of neurons in the brain and how these operations are abstracted by simple neuron models. It then builds networks of artificial neurons, which underlie much of the recent success of AI. The focus of the chapter is on using such techniques, with their theoretical grounding considered subsequently.
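A minimal sketch of the idea the chapter develops: stacking layers of simple neuron models, each a weighted sum followed by a non-linearity, yields the networks behind recent AI successes. Shapes and values below are illustrative.

```python
# Minimal sketch of a two-layer feedforward network's forward pass.
# Each layer applies weights and a bias, then a ReLU non-linearity.
import numpy as np

def layer(x, W, b):
    return np.maximum(0.0, W @ x + b)  # ReLU neurons: weighted sum, threshold

rng = np.random.default_rng(0)
x = rng.normal(size=4)                            # input vector
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)     # hidden layer: 4 -> 8
W2, b2 = rng.normal(size=(2, 8)), np.zeros(2)     # output layer: 8 -> 2
h = layer(x, W1, b1)
print(W2 @ h + b2)  # raw outputs (no activation on the final layer)
```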


2012
Vol 263-266
pp. 3374-3377
Author(s):  
Hua Liang Wu
Zhen Dong Mu
Jian Feng Hu

Neural networks are often used as classification tools. In this paper, a neural network is applied to the analysis of motor imagery EEG: the EEG is first subjected to a Hjorth transformation, the brain electrical signal is then converted into the frequency domain, and finally the Fisher distance is used for feature extraction. Classification achieved a 97.86% recognition rate on the study samples and an 80% recognition rate on the test samples.
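Two of the pipeline's steps, as I read them, can be sketched directly: Hjorth parameters of an EEG channel and a Fisher score for ranking features between two motor-imagery classes. The signal below is synthetic, and the paper's exact preprocessing and frequency-domain step are not reproduced.

```python
# Minimal sketch: Hjorth parameters of a 1-D signal and a Fisher score for
# one feature across two classes. The EEG epoch here is synthetic noise.
import numpy as np

def hjorth(x):
    """Hjorth activity, mobility and complexity of a 1-D signal."""
    dx, ddx = np.diff(x), np.diff(x, n=2)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def fisher_score(f1, f2):
    """Between-class separability of one feature: larger = more discriminative."""
    return (f1.mean() - f2.mean()) ** 2 / (f1.var() + f2.var())

rng = np.random.default_rng(0)
eeg = rng.normal(size=1000)          # hypothetical single-channel EEG epoch
print(hjorth(eeg))
print(fisher_score(rng.normal(1.0, 1, 50), rng.normal(0.2, 1, 50)))
```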

