ARTFLOW: A Fast, Biologically Inspired Neural Network That Learns Optic Flow Templates for Self-Motion Estimation

Sensors ◽  
2021 ◽  
Vol 21 (24) ◽  
pp. 8217
Author(s):  
Oliver W. Layton

Most algorithms for steering, obstacle avoidance, and moving object detection rely on accurate self-motion estimation, a problem animals solve in real time as they navigate through diverse environments. One biological solution leverages optic flow, the changing pattern of motion experienced on the eye during self-motion. Here I present ARTFLOW, a biologically inspired neural network that learns patterns in optic flow to encode the observer’s self-motion. The network combines the fuzzy ART unsupervised learning algorithm with a hierarchical architecture based on the primate visual system. This design affords fast, local feature learning across parallel modules in each network layer. Simulations show that the network is capable of learning stable patterns from optic flow simulating self-motion through environments of varying complexity with only one epoch of training. ARTFLOW trains substantially faster and yields self-motion estimates that are far more accurate than a comparable network that relies on Hebbian learning. I show how ARTFLOW serves as a generative model to predict the optic flow that corresponds to neural activations distributed across the network.
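The fuzzy ART algorithm that ARTFLOW builds on can be summarized in a few operations: complement-code the input, rank existing category templates by a choice function, test the best match against a vigilance threshold, and either update the winning template or commit a new one. A minimal illustrative sketch follows, under standard fuzzy ART assumptions (complement coding, fast learning); the class name and parameter values are ours, not the paper's implementation.

```python
def fuzzy_and(a, b):
    """Elementwise minimum: the fuzzy AND used throughout fuzzy ART."""
    return [min(x, y) for x, y in zip(a, b)]


def norm1(v):
    """City-block norm for non-negative vectors."""
    return sum(v)


class FuzzyART:
    def __init__(self, vigilance=0.75, beta=1.0, alpha=0.001):
        self.rho = vigilance   # match threshold in [0, 1]
        self.beta = beta       # learning rate (1.0 = fast learning)
        self.alpha = alpha     # choice parameter
        self.w = []            # one template per learned category

    def present(self, x):
        # Complement coding keeps templates from eroding toward zero.
        I = list(x) + [1 - v for v in x]
        # Rank categories by the choice function T_j = |I ^ w_j| / (alpha + |w_j|).
        order = sorted(
            range(len(self.w)),
            key=lambda j: -norm1(fuzzy_and(I, self.w[j]))
            / (self.alpha + norm1(self.w[j])),
        )
        for j in order:
            match = norm1(fuzzy_and(I, self.w[j])) / norm1(I)
            if match >= self.rho:  # vigilance test passed: resonate and learn
                old = self.w[j]
                self.w[j] = [
                    self.beta * m + (1 - self.beta) * o
                    for m, o in zip(fuzzy_and(I, old), old)
                ]
                return j
        self.w.append(I[:])  # no category matched: commit a new one
        return len(self.w) - 1
```

Presenting two dissimilar patterns under high vigilance commits two categories, and re-presenting a learned pattern reselects its category, which is the stability-with-one-epoch property the abstract highlights.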

Author(s):  
Baiyu Peng ◽  
Qi Sun ◽  
Shengbo Eben Li ◽  
Dongsuk Kum ◽  
Yuming Yin ◽  
...  

Abstract Recent years have seen the rapid development of autonomous driving systems, which are typically designed with either a hierarchical architecture or an end-to-end architecture. The hierarchical architecture is complicated and hard to design, while the end-to-end architecture is more promising due to its simple structure. This paper puts forward an end-to-end autonomous driving method based on the Dueling Double Deep Q-Network reinforcement learning algorithm, making it possible for the vehicle to learn end-to-end driving by itself. The paper first proposes an architecture for the end-to-end lane-keeping task. Unlike the traditional image-only state space, the presented state space is composed of both camera images and vehicle motion information. A corresponding dueling neural network structure is then introduced, which reduces variance and improves sampling efficiency. The proposed method is then applied in The Open Racing Car Simulator (TORCS) to demonstrate its performance, where it surpasses human drivers. Finally, the saliency map of the neural network is visualized, which indicates that the trained network drives by observing the lane lines. A video of the presented work is available online: https://youtu.be/76ciJmIHMD8 or https://v.youku.com/v_show/id_XNDM4ODc0MTM4NA==.html.
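The dueling structure mentioned above decomposes the Q-function into a state value and per-action advantages, Q(s, a) = V(s) + (A(s, a) − mean over a' of A(s, a')); subtracting the mean advantage makes the decomposition identifiable. A minimal sketch of that aggregation step, with placeholder linear value and advantage heads rather than the paper's network:

```python
def dueling_q(features, v_weights, a_weights):
    """Combine value and advantage streams into Q-values.

    Q(s, a) = V(s) + (A(s, a) - mean_a' A(s, a'))
    """
    # State value V(s): one scalar from the value stream.
    V = sum(w * f for w, f in zip(v_weights, features))
    # Advantage A(s, a): one scalar per action from the advantage stream.
    A = [sum(w * f for w, f in zip(wa, features)) for wa in a_weights]
    mean_A = sum(A) / len(A)
    # Subtracting mean_A forces the advantages to carry only relative
    # action preferences, while V carries the shared state value.
    return [V + a - mean_A for a in A]
```

By construction the Q-values average to V(s), so the value stream alone can track how good a state is even when action choice barely matters there.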


2020 ◽  
Vol 12 (2) ◽  
pp. 1-20
Author(s):  
Sourav Das ◽  
Anup Kumar Kolya

In this work, the authors extract distinct baseline features from a popular open-source music corpus and explore new recognition techniques by applying unsupervised Hebbian learning to a single-layer neural network on the same dataset. They present detailed empirical findings showing how such an algorithm can help a single-layer feedforward network learn music features as patterns. The unsupervised training algorithm enables the proposed neural network to achieve an accuracy of 90.36% for music feature detection. For comparative analysis, they set their results against several previous benchmark works on similar tasks. They further discuss the limitations of the work and provide a thorough error analysis. They hope to gather new information about this particular classification technique and its performance, and to identify future directions that could improve computational music feature recognition.
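The unsupervised Hebbian training the abstract refers to strengthens a weight in proportion to the correlation between input and output activity. The plain Hebb rule Δw = η·y·x grows without bound, so the sketch below uses Oja's stabilized variant for a single linear unit; the data, learning rate, and epoch count are illustrative assumptions, not the paper's setup.

```python
import random


def oja_train(data, lr=0.05, epochs=50, seed=0):
    """Unsupervised Hebbian learning (Oja's rule) for one linear unit.

    Update: w += lr * y * (x - y * w), with y = w . x.
    The -y*w term normalizes the weights, so w converges toward the
    principal direction of the input data with roughly unit norm.
    """
    rng = random.Random(seed)
    w = [rng.uniform(-0.1, 0.1) for _ in range(len(data[0]))]
    for _ in range(epochs):
        for x in data:
            y = sum(wi * xi for wi, xi in zip(w, x))
            w = [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]
    return w
```

On inputs lying along a single direction, the learned weight vector aligns with that direction (up to sign), which is the sense in which an unsupervised Hebbian layer "learns features as patterns".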


2015 ◽  
Vol 2015 ◽  
pp. 1-7 ◽  
Author(s):  
Lin Li ◽  
Shengsheng Yu ◽  
Luo Zhong ◽  
Xiaozhen Li

Multilingual text detection in natural scenes is still a challenging task in computer vision. In this paper, we apply an unsupervised learning algorithm to learn language-independent stroke features, and combine unsupervised stroke feature learning with automatic multilayer feature extraction to improve the representational power of text features. We also develop a novel nonlinear network based on the traditional Convolutional Neural Network that is able to detect multilingual text regions in images. The proposed method is evaluated on standard benchmarks and a multilingual dataset, and demonstrates improvement over previous work.


2021 ◽  
Author(s):  
Shubhangi Pande ◽  
Neeraj Kumar Rathore ◽  
Anuradha Purohit

Abstract Machine learning applications make extensive use of the feed-forward neural network (FFNN). However, it has been observed that the training speed of the FFNN is not up to the mark. The fundamental causes of this problem are: 1) slow gradient descent methods are broadly used to train neural networks, and 2) such methods require iterative tuning of hidden layer parameters, including weights and biases. To resolve these problems, this paper introduces the Extreme Learning Machine (ELM), an emerging machine learning algorithm that serves as a substitute for the feed-forward neural network. ELM also provides a general learning scheme for an immense diversity of networks (SLFNs and multilayer networks). According to ELM's originators, networks trained using backpropagation learn a thousand times more slowly than networks trained using ELM; in addition, ELM models exhibit good generalization performance. ELM is more efficient than the Least Squares Support Vector Machine (LS-SVM), the Support Vector Machine (SVM), and other state-of-the-art approaches. ELM's distinctive design has three main targets: 1) high learning accuracy, 2) less human intervention, and 3) fast learning speed. ELM is also considered to have a greater capacity to reach a global optimum. Applications of ELM include feature learning, clustering, regression, compression, and classification. With this paper, our goal is to present various ELM variants, their applications, ELM's strengths, ELM research and comparisons with other learning algorithms, and many more concepts related to ELM.
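The core ELM recipe is short: fix random hidden-layer weights and biases (never tuned), compute the hidden-layer output matrix H on the training data, and solve for the output weights in a single least-squares step instead of iterating gradient descent. A minimal dependency-free sketch follows; the tanh activation, ridge term, and normal-equations solver are our assumptions, not prescribed by the paper.

```python
import math
import random


def solve(A, c):
    """Solve A x = c by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [c[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for j in range(col, n + 1):
                M[r][j] -= f * M[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x


def elm_train(X, y, n_hidden, seed=0):
    """One-shot ELM training: random hidden layer, least-squares output."""
    rng = random.Random(seed)
    d = len(X[0])
    # Random, fixed hidden-layer weights and biases (never tuned).
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(n_hidden)]
    b = [rng.uniform(-1, 1) for _ in range(n_hidden)]
    H = [[math.tanh(sum(w[j] * x[j] for j in range(d)) + bi)
          for w, bi in zip(W, b)] for x in X]
    # Output weights beta from ridge-regularized normal equations:
    # (H^T H + eps I) beta = H^T y.
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H)))
          + (1e-6 if i == j else 0.0) for j in range(n_hidden)]
         for i in range(n_hidden)]
    c = [sum(H[k][i] * y[k] for k in range(len(H))) for i in range(n_hidden)]
    return W, b, solve(A, c)


def elm_predict(model, x):
    W, b, beta = model
    h = [math.tanh(sum(w[j] * x[j] for j in range(len(x))) + bi)
         for w, bi in zip(W, b)]
    return sum(bi_ * hi for bi_, hi in zip(beta, h))
```

Because only the output weights are learned, and in closed form, training cost is one matrix solve; this is the source of the speed advantage over backpropagation claimed in the abstract.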




2001 ◽  
Vol 6 (2) ◽  
pp. 129-136 ◽  
Author(s):  
Jiyang Dong ◽  
Shenchu Xu ◽  
Zhenxiang Chen ◽  
Boxi Wu

The discrete Hopfield neural network (DHNN) is studied by performing permutation operations on the synaptic weight matrix. The set of storable patterns that can be stored with the Hebbian learning algorithm without losing memories is studied, and a condition is proposed which ensures that all patterns in the storable set have basins of attraction of the same size. The permutation symmetries of the network are then studied in association with the stored pattern set. A construction of a storable pattern set satisfying that condition is achieved by considering its invariance under a point group.
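The Hebbian storage prescription analyzed here sets the weights to the scaled sum of pattern outer products, W = (1/P) Σ_μ ξ^μ (ξ^μ)ᵀ with zero diagonal, and recall iterates s_i ← sgn(Σ_j W_ij s_j). A minimal sketch under those standard DHNN assumptions (not the paper's permutation construction):

```python
def hebbian_weights(patterns):
    """Hebbian storage: W_ij = (1/P) sum_mu p_i^mu p_j^mu, zero diagonal."""
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / len(patterns)
    return W


def recall(W, state, sweeps=10):
    """Asynchronous recall: update each +/-1 unit by the sign of its field."""
    s = list(state)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(W[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s
```

Storing two orthogonal patterns and corrupting one bit, recall falls back into the original pattern's basin of attraction, which is the basin-size notion the condition in the paper concerns.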


2016 ◽  
Vol 10 (1) ◽  
pp. 54-69
Author(s):  
Jui-Lin Lai ◽  
Chung-Yu Wu

This paper proposes a Ratio-Memory Cellular Neural Network (RMCNN) structured with self-feedback and a modified Hebbian learning algorithm. The learnable RMCNN architecture was designed and realized in CMOS technology for associative-memory neural network applications. The proposed system can learn exemplar patterns and correctly recognize the corresponding output patterns. For all test input exemplar patterns, only the self-output pixel value in the A template and the B template weights are updated by the nearest neighboring five elements. The learned ratio weights of the B template are generated such that the stored weights undergo a summation-of-absolute-coefficients operation to enhance the features of the recognized pattern. Simulation results show that the system can learn exemplar patterns with noise and recognize the correct pattern. The 9×9 RMCNN structure with self-feedback and the modified Hebbian learning algorithm is implemented and verified in CMOS circuits using TSMC 0.25 µm 1P5M VLSI technology. The proposed RMCNN has greater learning and recognition capability for variant exemplar patterns in auto-associative memory neural system applications.


2020 ◽  
Vol 20 (11) ◽  
pp. 1212
Author(s):  
Scott Steinmetz ◽  
Oliver Layton ◽  
Nathaniel Powell ◽  
Brett Fajen
