Neural network model for path-finding problems with the self-recovery property

2019 ◽  
Vol 99 (3) ◽  
Author(s):  
Kei-Ichi Ueda ◽  
Keiichi Kitajo ◽  
Yoko Yamaguchi ◽  
Yasumasa Nishiura
1997 ◽  
Vol 07 (05) ◽  
pp. 1133-1140 ◽  
Author(s):  
Vladimir E. Bondarenko

The self-organization processes in an analog asymmetric neural network with time delay were considered. It was shown that, depending on the values of the coupling constants between neurons, the neural network produced sinusoidal, quasi-periodic, or chaotic outputs. The correlation dimension, largest Lyapunov exponent, Shannon entropy, and normalized Shannon entropy of the solutions were studied from the point of view of self-organization processes in systems far from equilibrium. The quantitative characteristics of the chaotic outputs were compared with those of the human EEG. The correlation dimension ν varies from 1.0 for sinusoidal oscillations to 9.5 in the chaotic case; these values agree with the experimental values of 6 to 8 obtained from the human EEG. The largest Lyapunov exponent λ calculated from the neural network model lies in the range from -0.2 s⁻¹ to 4.8 s⁻¹ for the chaotic solutions, which is consistent with the interval from 0.028 s⁻¹ to 2.9 s⁻¹ observed in experimental studies of the human EEG.
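
A minimal, illustrative sketch (not from the paper) of one of the measures mentioned above: a histogram-based normalized Shannon entropy of a scalar output signal. The bin count and the NumPy helper below are assumptions chosen for the example.

```python
import numpy as np

def normalized_shannon_entropy(signal, n_bins=64):
    """Histogram-based Shannon entropy of a scalar time series,
    normalized by log(n_bins) so the result lies in [0, 1]."""
    counts, _ = np.histogram(signal, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    entropy = -np.sum(p * np.log(p))  # Shannon entropy in nats
    return entropy / np.log(n_bins)   # ~0 for regular signals, ~1 for uniform noise

# Example: a sinusoid gives a low value, white noise a value near 1.
t = np.linspace(0, 10, 10_000)
print(normalized_shannon_entropy(np.sin(2 * np.pi * t)))
print(normalized_shannon_entropy(np.random.default_rng(0).normal(size=10_000)))
```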


2014 ◽  
Vol 140 (2) ◽  
pp. 05014001 ◽  
Author(s):  
Yang Gao ◽  
Zhe Feng ◽  
Yang Wang ◽  
Jin-Long Liu ◽  
Shuang-Cheng Li ◽  
...  

Perception ◽  
1997 ◽  
Vol 26 (1_suppl) ◽  
pp. 204-204 ◽  
Author(s):  
T Kohonen

We stipulate that the following three categories of dynamic phenomena must be present in a realistic neural-network model: (i) activation; (ii) adaptation; (iii) plasticity control. In most neural models only activation and adaptation are present. The self-organising map (SOM) algorithm is the only neural-network model that includes all three phenomena. Its modelling laws include the following partial functions: (1) Some parallel computing mechanism for the specification of a cell in a piece of cell mass whose parametric representation matches or responds best to the afferent input. This cell is called the ‘winner’. (2) Control of some learning factor in the cells in the neighbourhood of the ‘winner’ so that only this neighbourhood is adapted to the current input. By virtue of the ‘neighbourhood learning’, the SOM forms spatially ordered maps of sensory experiences, which resemble the maps observed in the brain. The newest version of the SOM is the ASSOM (adaptive-subspace SOM). The adaptive processing units of ASSOM are able to represent signal subspaces, not just templates of the original patterns. A signal subspace is an invariance group; therefore the processing units of ASSOM are able to respond invariantly, e.g. to moving and transforming patterns, in a similar fashion to the complex cells in the cortex.
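
As a rough illustration of the two partial functions described above, the sketch below implements winner selection and Gaussian neighbourhood learning for a one-dimensional SOM. The decay schedules, grid shape, and parameter values are assumptions for the example, not Kohonen's reference formulation.

```python
import numpy as np

def som_step(weights, x, t, n_steps, sigma0=3.0, lr0=0.5):
    """One update of a 1-D self-organizing map.

    weights : (n_units, dim) array of unit parameter vectors
    x       : (dim,) input sample
    t, n_steps : current step and total steps, used for decay schedules
    """
    # (1) winner: the unit whose weight vector best matches the input
    winner = np.argmin(np.linalg.norm(weights - x, axis=1))

    # (2) neighbourhood learning: only units near the winner on the map grid
    #     are pulled toward the current input
    sigma = sigma0 * (1.0 - t / n_steps) + 1e-3
    lr = lr0 * (1.0 - t / n_steps)
    grid_dist = np.abs(np.arange(len(weights)) - winner)
    h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))   # Gaussian neighbourhood
    weights += lr * h[:, None] * (x - weights)
    return weights

# Toy usage: map 2-D points onto a chain of 10 units.
rng = np.random.default_rng(0)
w = rng.random((10, 2))
data = rng.random((1000, 2))
for t, x in enumerate(data):
    w = som_step(w, x, t, len(data))
```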


2014 ◽  
Vol 670-671 ◽  
pp. 950-954 ◽  
Author(s):  
Ning Ding ◽  
Ding Tong Zhang ◽  
Zuo Zhen Wang

A novel, energy-saving rare earth lifting permanent magnetic chuck was designed based on a neural network. The working principle, the neural network model for the magnetic circuit design, and the self-acting driving system of the rare earth lifting permanent magnetic chuck were developed. Industrial prototypes were manufactured and verified that the proposed rare earth lifting permanent magnetic chuck was feasible.


2020 ◽  
Author(s):  
Yang Liu ◽  
Hansaim Lim ◽  
Lei Xie

Motivation: Drug discovery is time-consuming and costly. Machine learning, especially deep learning, shows great potential in accelerating the drug discovery process and reducing its cost. A big challenge in developing robust and generalizable deep learning models for drug design is the lack of a large amount of data with high-quality, balanced labels. To address this challenge, we developed a self-training method, PLANS, that exploits millions of unlabeled chemical compounds as well as partially labeled pharmacological data to improve the performance of neural network models.
Results: We evaluated self-training with PLANS on the Cytochrome P450 binding activity prediction task and showed that our method improves the performance of the neural network model by a large margin. Compared with the baseline deep neural network model, the PLANS-trained neural network model improved accuracy, precision, recall, and F1 score by 13.4%, 12.5%, 8.3%, and 10.3%, respectively. Self-training with PLANS is model-agnostic and can be applied to any deep learning architecture. Thus, PLANS provides a general solution for utilizing unlabeled and partially labeled data to improve predictive modeling for drug discovery.
Availability: The code that implements PLANS is available at https://github.com/XieResearchGroup/PLANS
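
For readers unfamiliar with self-training, the sketch below shows a generic pseudo-labeling loop in the spirit described above. It is not the PLANS implementation (see the GitHub link); the scikit-learn classifier, confidence threshold, and random toy data are assumptions chosen for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def self_train(model, X_labeled, y_labeled, X_unlabeled,
               confidence=0.9, rounds=3):
    """Generic self-training loop: fit on labeled data, pseudo-label the
    unlabeled pool with high-confidence predictions, and refit."""
    X, y = X_labeled.copy(), y_labeled.copy()
    pool = X_unlabeled.copy()
    for _ in range(rounds):
        model.fit(X, y)
        if len(pool) == 0:
            break
        proba = model.predict_proba(pool)
        keep = proba.max(axis=1) >= confidence            # confident pseudo-labels only
        X = np.vstack([X, pool[keep]])
        y = np.concatenate([y, model.classes_[proba[keep].argmax(axis=1)]])
        pool = pool[~keep]                                # shrink the unlabeled pool
    return model

# Toy usage with random features standing in for real compound descriptors.
rng = np.random.default_rng(0)
Xl, yl = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)
Xu = rng.normal(size=(2000, 16))
clf = self_train(MLPClassifier(max_iter=500), Xl, yl, Xu)
```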

