Sequential learning neural network and its application in agriculture

Author(s): Chao Deng, Fanlun Xiong, Ying Tan, Zhenya He

Electronics, 2021, Vol. 10 (3), pp. 222
Author(s): Baigan Zhao, Yingping Huang, Hongjian Wei, Xing Hu

Visual odometry (VO) refers to the incremental estimation of the motion state of an agent (e.g., a vehicle or robot) from image information, and is a key component of modern localization and navigation systems. Addressing the monocular VO problem, this paper presents a novel end-to-end network for estimating camera ego-motion. The network learns the latent subspace of optical flow (OF) and models sequential dynamics so that the motion estimation is constrained by the relations between sequential images. We compute the OF field of consecutive images and extract the latent OF representation in a self-encoding manner. A recurrent neural network then examines the OF changes, i.e., performs sequential learning. The extracted sequential OF subspace is used to regress the 6-dimensional pose vector. We derive three models with different network structures and training schemes: LS-CNN-VO, LS-AE-VO, and LS-RCNN-VO. In particular, we train the encoder separately in an unsupervised manner; this avoids non-convergence when training the whole network and yields a more generalized and effective feature representation. Extensive experiments on the KITTI and Malaga datasets demonstrate that LS-RCNN-VO outperforms existing learning-based VO approaches.
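
As a rough illustration of the pipeline described in this abstract, the PyTorch sketch below encodes optical-flow fields into a latent vector, models the latent sequence with a recurrent layer, and regresses a 6-dimensional pose per frame. All layer sizes, the class name `FlowVO`, and the choice of an LSTM are illustrative assumptions; the paper's actual LS-CNN-VO/LS-AE-VO/LS-RCNN-VO architectures are not reproduced here.

```python
# Minimal sketch of an encoder -> RNN -> pose-regression pipeline.
# Architecture details are assumptions, not the paper's exact design.
import torch
import torch.nn as nn

class FlowVO(nn.Module):
    def __init__(self, latent_dim=256, hidden_dim=512):
        super().__init__()
        # Convolutional encoder: compresses a 2-channel optical-flow field
        # into a latent vector (the "latent OF subspace").
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, latent_dim),
        )
        # Recurrent layer models how the latent OF changes across frames.
        self.rnn = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        # Regression head: 6-D pose (3 translation + 3 rotation parameters).
        self.head = nn.Linear(hidden_dim, 6)

    def forward(self, flow_seq):                  # (B, T, 2, H, W)
        b, t = flow_seq.shape[:2]
        z = self.encoder(flow_seq.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(z)
        return self.head(out)                     # (B, T, 6) relative poses

poses = FlowVO()(torch.randn(1, 5, 2, 128, 384))  # dummy flow sequence
```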


Author(s): Anthony Robins, Marcus Frean

In this paper, we explore the concept of sequential learning and the efficacy of global and local neural network learning algorithms on a sequential learning task. Pseudorehearsal, a method developed by Robins [19] to solve the catastrophic forgetting problem that arises from the excessive plasticity of neural networks, is significantly more effective than other local learning algorithms for the sequential task. We further consider the concept of local learning and suggest that pseudorehearsal is so effective because it works directly at the level of the learned function, rather than indirectly on the representation of the function within the network. We also briefly explore the effect of local learning on generalization within the task.
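
For readers unfamiliar with the technique, the sketch below shows the core idea of pseudorehearsal in PyTorch: pseudo-items pair random inputs with the old network's own outputs, so rehearsing them preserves the learned function while new items are trained. The input distribution, mixing ratio, and loss are illustrative assumptions, not the settings used by Robins.

```python
# Minimal sketch of a pseudorehearsal training step, assuming a PyTorch
# network `net` that maps 2-D feature batches to output vectors.
import copy
import torch

def pseudorehearsal_step(net, new_x, new_y, n_pseudo=32, lr=1e-3):
    # 1. Snapshot the current network and generate pseudo-items: random
    #    inputs paired with the *old* network's own outputs. Rehearsing
    #    these protects the learned function, not the weights directly.
    old_net = copy.deepcopy(net).eval()
    pseudo_x = torch.rand(n_pseudo, new_x.shape[1])
    with torch.no_grad():
        pseudo_y = old_net(pseudo_x)
    # 2. Train on the new items mixed with the pseudo-items.
    x = torch.cat([new_x, pseudo_x])
    y = torch.cat([new_y, pseudo_y])
    opt = torch.optim.SGD(net.parameters(), lr=lr)
    loss = torch.nn.functional.mse_loss(net(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```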


Earthquakes are major catastrophes that cause enormous losses to both living beings and property, and their prediction is an important task in seismology. Neural networks play a key role in earthquake prediction. In this work, neural network architectures were created with different input and hidden layers and trained with deep learning optimization algorithms. The input layer was built from historical earthquake data for India obtained from the India Meteorological Department (IMD). Earthquake event attributes such as date, latitude, longitude, depth, and magnitude are mathematically converted into seismic indicators based on the Gutenberg-Richter law, and these indicators form the inputs of the neural network model. The developed network model was trained on these data using neural network algorithms, namely backpropagation and sequential learning: backpropagation is used for magnitude prediction, and sequential learning is used to build the prediction model for mapping risky areas. The loss and accuracy of the model are analyzed with a software tool, the Disaster Management System, developed for this work in Python. Deep neural network optimizers, namely Stochastic Gradient Descent (SGD), the Adaptive Gradient algorithm (AdaGrad), and Root Mean Square propagation (RMSprop), are used to optimize the prediction model, producing an earthquake prediction model with higher capability and accuracy. The work also provides cartography showing the seismic zones in India that may face earthquakes in the future.
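
As a hedged illustration of the ingredients named in this abstract, the sketch below computes one standard seismic indicator, the Gutenberg-Richter b-value (via Aki's 1965 maximum-likelihood estimate), and trains the same small Keras model with each of the three optimizers mentioned. The indicator set, network shape, and data here are placeholders, not the paper's Disaster Management System.

```python
# Minimal sketch: one Gutenberg-Richter indicator plus an optimizer comparison.
import numpy as np
import tensorflow as tf

def b_value(mags, m_min):
    """Aki (1965) maximum-likelihood estimate of the Gutenberg-Richter
    b-value, from the law log10 N(M) = a - b*M."""
    return np.log10(np.e) / (mags.mean() - m_min)

def make_model(optimizer):
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(8,)),           # 8 placeholder seismic indicators
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1),             # predicted magnitude
    ])
    model.compile(optimizer=optimizer, loss="mse", metrics=["mae"])
    return model

# Compare the three optimizers named above on the same placeholder data.
X = np.random.rand(100, 8)
y = np.random.rand(100, 1)
for opt in ("sgd", "adagrad", "rmsprop"):
    hist = make_model(opt).fit(X, y, epochs=5, verbose=0)
    print(opt, hist.history["loss"][-1])
```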


2003, Vol. 13 (05), pp. 333-351
Author(s): Di Wang, Narendra S. Chaudhari

A key problem in binary neural network learning is finding larger linearly separable subsets. In this paper we prove some lemmas about linear separability. Based on these lemmas, we propose the Multi-Core Learning (MCL) and Multi-Core Expand-and-Truncate Learning (MCETL) algorithms for constructing binary neural networks. We conclude that MCL and MCETL simplify the equations for computing weights and thresholds, and that they result in the construction of a simpler hidden layer. Examples are given to demonstrate these conclusions.
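
The notion of a linearly separable subset can be made concrete with a feasibility check: a labelled set is linearly separable if and only if some weights and threshold satisfy a margin inequality for every point. The SciPy sketch below expresses this as a linear program; it illustrates only the underlying separability question, not the paper's lemmas or the MCL/MCETL constructions.

```python
# Minimal sketch: test whether binary inputs labelled +/-1 are linearly
# separable, i.e. whether w and t exist with y_i * (w . x_i - t) >= 1.
import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    """X: (n, d) array of 0/1 inputs; y: (n,) labels in {+1, -1}."""
    n, d = X.shape
    # Variables: d weights plus one threshold. The constraint
    # y_i * (w . x_i - t) >= 1 becomes -y_i * (x_i, -1) . (w, t) <= -1.
    A_ub = -y[:, None] * np.hstack([X, -np.ones((n, 1))])
    b_ub = -np.ones(n)
    res = linprog(c=np.zeros(d + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1))
    return res.success

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
print(linearly_separable(X, np.array([-1, 1, 1, 1])))   # OR:  True
print(linearly_separable(X, np.array([-1, 1, 1, -1])))  # XOR: False
```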

