From neural network to psychophysics of time: Exploring emergent properties of RNNs using novel Hamiltonian formalism

2017 ◽  
Author(s):  
Rakesh Sengupta ◽  
Anindya Pattanayak ◽  
Raju Surampudi Bapi

Abstract The stability analysis of dynamical neural network systems generally follows the route of finding a suitable Liapunov function, after the fashion of Hopfield's famous paper on content-addressable memory networks, or of finding conditions that make divergent solutions impossible. In the current work we focus on biological recurrent neural networks (bRNNs) that require transient external inputs (Cohen-Grossberg networks). We propose a general method to construct Liapunov functions for recurrent neural networks with the help of a physically meaningful Hamiltonian function. This construct allows us to explore emergent properties of the recurrent network (e.g., the parameter configuration needed for winner-take-all competition in a leaky accumulator design) beyond what standard stability analysis offers, while recovering standard stability analysis (the ordinary differential equation approach) as a special case of the general stability constraint derived from the Hamiltonian formulation. We also show that the Cohen-Grossberg Liapunov function can be derived naturally from the Hamiltonian formalism. A strength of the construct is its usability as a predictor of behavior in psychophysical experiments involving numerosity and temporal duration judgements.
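
As an illustration of the winner-take-all regime mentioned in the abstract, the following minimal sketch (not the authors' code; the leak k, the lateral inhibition beta, the inputs I, and the quadratic energy used as a Liapunov candidate are all illustrative assumptions) simulates a rectified leaky accumulator and tracks the energy along the trajectory:

```python
import numpy as np

# Minimal sketch (not the authors' implementation): a rectified leaky
# accumulator with lateral inhibition, integrated with forward Euler.
# The leak k, inhibition beta, and inputs I are illustrative assumptions.

def simulate(k=1.0, beta=1.5, I=(1.0, 0.95, 0.9), dt=1e-3, T=10.0):
    I = np.asarray(I, dtype=float)
    n = I.size
    x = np.zeros(n)
    W = beta * (np.ones((n, n)) - np.eye(n))    # off-diagonal inhibition
    energies = []
    for _ in range(int(T / dt)):
        dx = -k * x - W @ x + I
        x = np.maximum(x + dt * dx, 0.0)        # rectified activities
        # Quadratic energy used here as a Liapunov candidate for this toy system.
        energies.append(0.5 * k * x @ x + 0.5 * x @ W @ x - I @ x)
    return x, np.array(energies)

x, E = simulate()
print("final activities:", np.round(x, 3))      # one dominant unit when beta > k
print("energy non-increasing:", bool(np.all(np.diff(E) <= 1e-9)))
```

With beta larger than k, the shared-activity state is unstable and a single accumulator survives while the energy decreases along the trajectory; this is the kind of parameter-dependent behavior the general stability constraint is meant to delimit.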

2009 ◽  
Vol 19 (05) ◽  
pp. 375-386 ◽  
Author(s):  
YONG ZHAO ◽  
YONGHUI XIA ◽  
QISHAO LU

Based on inequality analysis, matrix theory, and spectral theory, a class of general periodic neural networks with delays and impulses is studied. Some sufficient conditions are established for the existence and global exponential stability of a unique periodic solution. Furthermore, the results are applied to several typical impulsive neural network systems as special cases, and a real-life example is given to show the feasibility of the results.
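
As a purely illustrative companion to this result (not the systems treated in the paper; the decay rates, weights, delay, periodic input, and impulse schedule below are all assumed for demonstration), one can integrate a small delayed network with periodic impulses and watch two trajectories started from different initial histories collapse onto the same orbit, the numerical signature of a globally exponentially stable periodic solution:

```python
import numpy as np

# Illustrative sketch only (not the systems analyzed in the paper): a
# two-neuron network with a discrete delay, a periodic input, and periodic
# impulsive jumps.  All parameters below are assumptions chosen so that the
# contraction is easy to see numerically.

dt, T, tau, period = 1e-3, 40.0, 0.5, 2.0
a = np.array([2.0, 2.5])                        # decay rates
W = np.array([[0.3, -0.2], [0.1, 0.25]])        # delayed connection weights
d = int(tau / dt)                               # delay measured in steps
steps_per_period = int(period / dt)

def simulate(x0):
    hist = [np.array(x0, float)] * (d + 1)      # constant initial history on [-tau, 0]
    for n in range(int(T / dt)):
        t = n * dt
        x, x_del = hist[-1], hist[-d - 1]
        I = np.array([np.sin(np.pi * t), np.cos(np.pi * t)])   # period-2 input
        x_new = x + dt * (-a * x + W @ np.tanh(x_del) + I)
        if (n + 1) % steps_per_period == 0:     # impulsive jump once per period
            x_new = 0.8 * x_new
        hist.append(x_new)
    return hist[-1]

xa, xb = simulate([1.0, -1.0]), simulate([-2.0, 3.0])
print("distance between the two trajectories at t = T:", np.linalg.norm(xa - xb))
```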


2021 ◽  
Vol 21 (4) ◽  
pp. 3-14
Author(s):  
Trayan Stamov

Abstract In recent years, artificial intelligence has increasingly been deployed on embedded platforms in everyday life, including engineering design practice, from early-stage design ideas to the final decision. One of the most challenging problems is the design and implementation of neural networks for engineering design tasks. The successful design and practical application of neural network models depend on their qualitative properties, and elaborating efficient stability criteria is known to be of high importance. Moreover, different stability notions apply to differently behaving models. In addition, uncertainties are ubiquitous in neural network systems and may result in performance degradation, hazards or system damage. Driven by practical needs and theoretical challenges, the rigorous handling of uncertainties at the neural network design stage is an essential research topic. In this research, the concept of robust practical stability is introduced for generalized discrete neural network models under uncertainties applied in engineering design. A robust practical stability analysis is offered using the Lyapunov function method. Since the practical stability concept is more appropriate for engineering applications, the obtained results can be of practical significance to numerous engineering design problems of diverse interest.
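
A minimal numerical sketch of the practical stability notion (not the paper's generalized model; the system matrices, uncertainty bound, and the radii lam and LAM below are illustrative assumptions) checks that trajectories starting inside a ball of radius lam stay inside a larger ball of radius LAM despite bounded uncertainties:

```python
import numpy as np

# Illustrative sketch (not the paper's model): a discrete-time neural network
# x(k+1) = A @ x(k) + W @ tanh(x(k)) + u(k), where u(k) is bounded but
# otherwise unknown.  Practical stability here means: every trajectory that
# starts in the lam-ball remains inside the larger LAM-ball for all k.

rng = np.random.default_rng(0)
A = np.diag([0.5, 0.4, 0.6])                 # stable linear part (illustrative)
W = 0.1 * rng.standard_normal((3, 3))        # small activation weights
u_bound, lam, LAM, horizon = 0.05, 1.0, 2.0, 500

def trajectory(x0, steps=horizon):
    x = np.array(x0, float)
    radii = [np.linalg.norm(x)]
    for _ in range(steps):
        u = rng.uniform(-u_bound, u_bound, size=3)   # bounded uncertainty sample
        x = A @ x + W @ np.tanh(x) + u
        radii.append(np.linalg.norm(x))
    return max(radii)

# Sample initial states on the boundary of the lam-ball and check containment.
worst = max(trajectory(lam * v / np.linalg.norm(v))
            for v in rng.standard_normal((50, 3)))
print(f"max ||x(k)|| over sampled trajectories: {worst:.3f} (bound LAM = {LAM})")
```

A Lyapunov-based analysis of the kind described above replaces such sampling with an analytic guarantee valid for every admissible uncertainty, which is what makes it useful at the design stage.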


2021 ◽  
Vol 10 (1) ◽  
Author(s):  
Luzhe Huang ◽  
Hanlong Chen ◽  
Yilin Luo ◽  
Yair Rivenson ◽  
Aydogan Ozcan

Abstract Volumetric imaging of samples using fluorescence microscopy plays an important role in various fields including the physical, medical and life sciences. Here we report a deep learning-based volumetric image inference framework that uses 2D images sparsely captured by a standard wide-field fluorescence microscope at arbitrary axial positions within the sample volume. Through a recurrent convolutional neural network, which we term Recurrent-MZ, 2D fluorescence information from a few axial planes within the sample is explicitly incorporated to digitally reconstruct the sample volume over an extended depth-of-field. Using experiments on C. elegans and nanobead samples, Recurrent-MZ is demonstrated to significantly increase the depth-of-field of a 63×/1.4NA objective lens, also providing a 30-fold reduction in the number of axial scans required to image the same sample volume. We further illustrate the generalization of this recurrent network for 3D imaging by showing its resilience to varying imaging conditions, including, e.g., different sequences of input images, covering various axial permutations and unknown axial positioning errors. We also demonstrate wide-field to confocal cross-modality image transformations using the Recurrent-MZ framework and perform 3D image reconstruction of a sample using a few wide-field 2D fluorescence images as input, matching confocal microscopy images of the same sample volume. Recurrent-MZ demonstrates the first application of recurrent neural networks in microscopic image reconstruction and provides a flexible and rapid volumetric imaging framework, overcoming the limitations of current 3D scanning microscopy tools.
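
For readers who want a concrete picture of how a recurrent network can ingest a variable number of sparse 2D planes, the following toy sketch (a drastically simplified stand-in, not the published Recurrent-MZ architecture; the convolutional GRU cell, channel counts, and z-position encoding are assumptions) folds each input plane into a hidden state and decodes that state into an output axial stack:

```python
import torch
import torch.nn as nn

# Highly simplified sketch of the idea (not the published Recurrent-MZ
# architecture): a convolutional recurrent cell consumes a variable number of
# 2D wide-field planes one at a time, each concatenated with a map encoding
# its axial position, and a decoder then predicts a fixed stack of output
# planes spanning the extended depth range.

class ConvGRUCell(nn.Module):
    def __init__(self, in_ch, hid_ch):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], 1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * h_tilde

class ToyRecurrentMZ(nn.Module):
    def __init__(self, hid_ch=32, out_planes=16):
        super().__init__()
        self.cell = ConvGRUCell(in_ch=2, hid_ch=hid_ch)    # image + z-position map
        self.decoder = nn.Conv2d(hid_ch, out_planes, 3, padding=1)

    def forward(self, planes, z_positions):
        # planes: (B, N, H, W) sparse input planes; z_positions: (B, N)
        B, N, H, W = planes.shape
        h = planes.new_zeros(B, self.decoder.in_channels, H, W)
        for i in range(N):                                 # fold planes in sequentially
            z_map = z_positions[:, i].view(B, 1, 1, 1).expand(B, 1, H, W)
            x = torch.cat([planes[:, i:i + 1], z_map], dim=1)
            h = self.cell(x, h)
        return self.decoder(h)                             # (B, out_planes, H, W)

model = ToyRecurrentMZ()
vol = model(torch.rand(1, 3, 64, 64), torch.tensor([[0.0, 0.5, 1.0]]))
print(vol.shape)   # torch.Size([1, 16, 64, 64]) -> reconstructed axial stack
```

Because the hidden state is updated one plane at a time, this kind of design accepts different numbers and orderings of input planes, which is consistent with the resilience to axial permutations and positioning errors described in the abstract.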


1996 ◽  
Author(s):  
Edward C. Uberbacher ◽  
Y. Xu ◽  
R. W. Lee ◽  
Charles W. Glover ◽  
Martin Beckerman ◽  
...  

2013 ◽  
Vol 116 ◽  
pp. 22-29 ◽  
Author(s):  
Didi Wang ◽  
Pei-Chann Chang ◽  
Li Zhang ◽  
Jheng-Long Wu ◽  
Changle Zhou
