A simple neural network model produces chaos similar to the human EEG

1994 ◽  
Vol 196 (3-4) ◽  
pp. 195-200 ◽  
Author(s):  
Vladimir E. Bondarenko


Author(s):  
Fergyanto E. Gunawan ◽  
Herriyandi ◽  
Benfano Soewito ◽  
Tuga Mauritsius ◽  
Nico Surantha

2018 ◽  
Author(s):  
Muktabh Mayank Srivastava

We propose a simple neural network model that learns relations between sentences by passing their representations, obtained from a Long Short-Term Memory (LSTM) network, through a Relation Network. The Relation Network module extracts similarity between the multiple contextual representations produced by the LSTM. Our model is simple to implement, light in parameters, and works across multiple supervised sentence-comparison tasks. We show good results for the model on two sentence-comparison datasets.
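As a rough illustration of the Relation Network module described above, the following numpy sketch applies a small two-layer MLP g to every pair of contextual vectors and sums the results. All dimensions, weights, and the random stand-ins for LSTM outputs are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 8, 16                                      # hypothetical embedding / hidden sizes
W1, b1 = rng.normal(size=(h, 2 * d)), np.zeros(h)
W2, b2 = rng.normal(size=(1, h)), np.zeros(1)

def relation_network_score(reps_a, reps_b):
    """Apply a small MLP g to every pair of contextual vectors and sum.

    reps_a, reps_b: (T, d) arrays standing in for per-timestep LSTM outputs.
    """
    total = np.zeros(1)
    for u in reps_a:
        for v in reps_b:
            pair = np.concatenate([u, v])             # pair the two contexts
            hidden = np.maximum(W1 @ pair + b1, 0.0)  # ReLU hidden layer
            total += W2 @ hidden + b2
    return total

# Random stand-ins for the contextual representations of two sentences
reps_a = rng.normal(size=(5, d))
reps_b = rng.normal(size=(7, d))
score = relation_network_score(reps_a, reps_b)
```

Aggregating by summation over all pairs is what makes the module order-insensitive; a trained model would learn W1, W2 jointly with the LSTM.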


1997 ◽  
Vol 07 (05) ◽  
pp. 1133-1140 ◽  
Author(s):  
Vladimir E. Bondarenko

The self-organization processes in an analog asymmetric neural network with time delay were considered. It was shown that, depending on the values of the coupling constants between neurons, the neural network produced sinusoidal, quasi-periodic, or chaotic outputs. The correlation dimension, largest Lyapunov exponent, Shannon entropy, and normalized Shannon entropy of the solutions were studied from the point of view of self-organization processes in systems far from equilibrium. The quantitative characteristics of the chaotic outputs were compared with those of the human EEG. Calculation of the correlation dimension ν shows that its value varies from 1.0 in the case of sinusoidal oscillations to 9.5 in the chaotic case; these values of ν agree with the experimental values of 6 to 8 obtained from the human EEG. The largest Lyapunov exponent λ calculated from the neural network model ranges from -0.2 s⁻¹ to 4.8 s⁻¹ for the chaotic solutions, which overlaps the interval from 0.028 s⁻¹ to 2.9 s⁻¹ observed in experimental studies of the human EEG.
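The kind of delayed analog network the abstract describes can be sketched with a simple Euler integration of dx_i/dt = -x_i + Σ_j A_ij tanh(x_j(t − τ)). The coupling matrix, gain, and delay below are illustrative choices, not the paper's parameters.

```python
import numpy as np

def simulate_delay_network(A, tau_steps=50, dt=0.01, steps=5000, seed=1):
    """Euler-integrate dx_i/dt = -x_i + sum_j A_ij tanh(x_j(t - tau)),
    an analog network with delayed coupling (tau = tau_steps * dt)."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    hist = np.zeros((steps + tau_steps, n))
    hist[:tau_steps] = 0.1 * rng.normal(size=(tau_steps, n))  # small random initial history
    for t in range(tau_steps, steps + tau_steps):
        x, delayed = hist[t - 1], hist[t - 1 - tau_steps]
        hist[t] = x + dt * (-x + A @ np.tanh(delayed))
    return hist[tau_steps:]

# Illustrative asymmetric coupling; with this gain and delay the rest state
# is unstable and the network settles into sustained bounded oscillations.
A = 3.0 * np.array([[0.0, 1.0], [-1.0, 0.0]])
traj = simulate_delay_network(A)
```

From a trajectory like `traj` one could then estimate the correlation dimension or largest Lyapunov exponent with standard time-series methods, as the paper does for its solutions.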


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10 ◽  
Author(s):  
Zhong Wang ◽  
Peibei Shi

In order to distinguish between computers and humans, CAPTCHAs are widely used in scenarios such as website login and registration. Traditional CAPTCHA recognition methods have poor recognition ability and robustness across different types of verification codes. For this reason, the paper proposes a CAPTCHA recognition method based on a convolutional neural network with a focal loss function. The method improves the traditional VGG network structure and introduces the focal loss function to produce a new CAPTCHA recognition model. First, we perform preprocessing such as grayscale conversion, binarization, denoising, segmentation, and annotation, and then use the Keras library to build a simple neural network model. In addition, we build an end-to-end neural network model to recognize complex CAPTCHAs with high adhesion and many interference pixels. Testing on the CNKI CAPTCHA, the Zhengfang CAPTCHA, and randomly generated CAPTCHAs shows that the proposed method achieves better recognition and robustness on all three datasets and has certain advantages over traditional deep learning methods. The recognition rates are 99%, 98.5%, and 97.84%, respectively.
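The focal loss the paper introduces down-weights well-classified examples relative to plain cross-entropy. Below is a minimal numpy version for integer-labelled character predictions; the α and γ defaults follow the common convention and are not necessarily the paper's settings.

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=0.25):
    """Mean focal loss: -alpha * (1 - p_t)**gamma * log(p_t).

    probs:   (N, C) softmax outputs
    targets: (N,) integer class labels
    """
    p_t = probs[np.arange(len(targets)), targets]   # probability of the true class
    return np.mean(-alpha * (1.0 - p_t) ** gamma * np.log(p_t + 1e-12))

probs = np.array([[0.9, 0.05, 0.05],    # confident, correct prediction
                  [0.3, 0.6, 0.1]])     # less confident prediction
targets = np.array([0, 1])
loss = focal_loss(probs, targets)
```

With gamma = 0 this reduces to alpha-weighted cross-entropy; gamma > 0 shrinks the contribution of easy examples, which is the property that helps with hard, heavily distorted CAPTCHA characters.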


1999 ◽  
Vol 11 (1) ◽  
pp. 103-116 ◽  
Author(s):  
Dean V. Buonomano ◽  
Michael Merzenich

Numerous studies have suggested that the brain may encode information in the temporal firing patterns of neurons. However, little is known about how information comes to be temporally encoded or about the potential computational advantages of temporal coding. Here, it is shown that local inhibition may underlie the temporal encoding of spatial images. As a result of inhibition, the response of a given cell can be significantly modulated by stimulus features outside its own receptive field. Feedforward and lateral inhibition can modulate both the firing rate and temporal features such as latency. In this article, it is shown that a simple neural network model can use local inhibition to generate temporal codes of handwritten numbers. The temporal encoding of a spatial pattern has the interesting and computationally beneficial property of position invariance. This work demonstrates a manner by which the nervous system may generate temporal codes and shows that temporal encoding can be used to create position-invariant codes.
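A toy integrate-to-threshold sketch of the latency idea above: a unit with stronger net drive fires earlier, and inhibition driven by stimulus features outside its receptive field delays the spike. The weights, stimulus, and threshold are hypothetical, not the paper's model.

```python
import numpy as np

def first_spike_latency(drive, threshold=1.0):
    """A unit integrating a constant net drive crosses threshold
    at t = threshold / drive (clipped to avoid division by zero)."""
    return threshold / np.maximum(drive, 1e-9)

def latency_code(stimulus, w_exc, w_inh):
    """Latency of each unit given feedforward excitation from its own
    receptive field minus inhibition driven by surrounding features."""
    net_drive = w_exc @ stimulus - w_inh @ stimulus
    return first_spike_latency(net_drive)

stimulus = np.array([1.0, 0.5, 0.2])       # toy spatial pattern
w_exc = np.array([[1.0, 0.0, 0.0]])        # unit's own receptive field
w_inh = np.array([[0.0, 0.3, 0.3]])        # inhibition from outside the RF
lat_inhibited = latency_code(stimulus, w_exc, w_inh)
lat_uninhibited = latency_code(stimulus, w_exc, np.zeros_like(w_inh))
```

Because the latency depends on surrounding features rather than absolute position, the same pattern shifted across the input can map to the same relative spike-timing pattern, which is the position-invariance property the abstract highlights.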


2007 ◽  
Vol 340 (1-2) ◽  
pp. 1-11 ◽  
Author(s):  
N. Samani ◽  
M. Gohari-Moghadam ◽  
A.A. Safavi

