Neural Control System for Autonomous Vehicles

Author(s): Francisco García-Córdova, Antonio Guerrero-González, Fulgencio Marín-García

Neural networks have been used in a number of robotic applications (Das & Kar, 2006; Fierro & Lewis, 1998), including both manipulators and mobile robots. A typical approach is to use neural networks for nonlinear system modelling, including for instance the learning of forward and inverse models of a plant, noise cancellation, and other forms of nonlinear control (Fierro & Lewis, 1998). An alternative approach is to solve a particular problem by designing a specialized neural network architecture and/or learning rule (Sutton & Barto, 1981). It is clear that biological brains, though exhibiting a certain degree of homogeneity, rely on many specialized circuits designed to solve particular problems. We are interested in understanding how animals are able to solve complex problems such as learning to navigate in an unknown environment, with the aim of applying what is learned from biology to the control of robots (Chang & Gaudiano, 1998; Martínez-Marín, 2007; Montes-González, Santos-Reyes & Ríos-Figueroa, 2006). In particular, this article presents a neural architecture that integrates a kinematical adaptive neuro-controller for trajectory tracking and an obstacle avoidance adaptive neuro-controller for nonholonomic mobile robots. The kinematical adaptive neuro-controller, termed the Self-Organization Direction Mapping Network (SODMN), is a real-time, unsupervised neural network that learns to control a nonholonomic mobile robot in a nonstationary environment; it combines associative learning and Vector Associative Map (VAM) learning to generate transformations between spatial and velocity coordinates (García-Córdova, Guerrero-González & García-Marín, 2007). These transformations are learned in an unsupervised training phase, during which the robot moves as a result of randomly selected wheel velocities. The obstacle avoidance adaptive neuro-controller is a neural network that learns to control avoidance behaviours in a mobile robot based on a form of animal learning known as operant conditioning. Learning, which requires no supervision, takes place as the robot moves around a cluttered environment with obstacles. The neural network requires no knowledge of the geometry of the robot or of the quality, number, or configuration of the robot's sensors. The efficacy of the proposed neural architecture is tested experimentally on a differentially driven mobile robot.
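To make the unsupervised training phase concrete, the following is a minimal sketch of a VAM-style motor-babbling loop for a differential-drive robot, in which random wheel commands and their observed effects train an inverse map from desired motion to wheel speeds. The variable names, delta-rule update, and simplified kinematic model are illustrative assumptions, not the authors' exact SODMN equations.

```python
import numpy as np

# Motor babbling: issue random wheel speeds, observe the resulting body
# motion, and learn an inverse map W from desired motion back to wheel
# commands with an associative delta rule. (Illustrative sketch only.)

rng = np.random.default_rng(0)
radius, wheel_base = 0.05, 0.30       # wheel radius and axle length (m)
W = np.zeros((2, 2))                  # maps (v, omega) -> (w_left, w_right)
eta = 0.5                             # learning rate

def body_motion(w_wheels):
    """Forward speed v and turn rate omega produced by wheel speeds."""
    v = radius * (w_wheels[0] + w_wheels[1]) / 2.0
    omega = radius * (w_wheels[1] - w_wheels[0]) / wheel_base
    return np.array([v, omega])

for _ in range(5000):                 # unsupervised training phase
    w_wheels = rng.uniform(-5.0, 5.0, 2)      # randomly selected wheel speeds
    x = body_motion(w_wheels)                 # observed effect of the command
    W += eta * np.outer(w_wheels - W @ x, x)  # delta-rule update

# After babbling, a desired motion maps directly to wheel commands:
print("wheel speeds for v=0.2 m/s, omega=0:", W @ np.array([0.2, 0.0]))
```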

Author(s): Yi Liang, Ho-Hoon Lee

In this study, a decoupled controller, consisting of a force controller and a torque controller, is designed to achieve smooth translational and rotational motion control of a group of nonholonomic mobile robots. The proposed controller also solves the problem of obstacle avoidance, where obstacles with arbitrary boundary shapes are taken into account. Because the tangential direction of the obstacle boundary is adopted as the robot's guiding direction, the proposed controller allows a mobile robot to escape from a concave obstacle, whereas the robot could become trapped under most conventional obstacle avoidance algorithms.
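As a rough illustration of the tangential-guidance idea, the sketch below picks a guiding direction for the translational controller: toward the goal in free space, and along the obstacle-boundary tangent when an obstacle is close. The safety threshold, sign convention, and function signature are hypothetical; the paper's actual decoupled force/torque control law is not reproduced here.

```python
import numpy as np

def guiding_direction(pos, goal, obstacle_normal, obstacle_dist,
                      safe_dist=0.5):
    """Unit guiding direction: goal-directed in free space, tangential
    to the obstacle boundary when an obstacle is within safe_dist."""
    to_goal = goal - pos
    to_goal /= np.linalg.norm(to_goal)
    if obstacle_dist is None or obstacle_dist > safe_dist:
        return to_goal                      # free space: head for the goal
    # Tangent of the boundary: rotate the sensed surface normal by 90
    # degrees, choosing the sign that agrees best with the goal direction,
    # so the robot slides along the boundary instead of pushing into it.
    n = obstacle_normal / np.linalg.norm(obstacle_normal)
    tangent = np.array([-n[1], n[0]])
    if np.dot(tangent, to_goal) < 0:
        tangent = -tangent
    return tangent

# Example: goal behind a wall whose normal points back at the robot.
d = guiding_direction(pos=np.array([0.0, 0.0]), goal=np.array([2.0, 0.0]),
                      obstacle_normal=np.array([-1.0, 0.0]),
                      obstacle_dist=0.3)
print("guiding direction:", d)   # slides along the wall, not into it
```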


2021, Vol. 2 (1), pp. 1-25
Author(s): Yongsen Ma, Sheheryar Arshad, Swetha Muniraju, Eric Torkildson, Enrico Rantala, ...

In recent years, Channel State Information (CSI) measured by WiFi has been widely used for human activity recognition. In this article, we propose a deep learning design for location- and person-independent activity recognition with WiFi. The proposed design consists of three Deep Neural Networks (DNNs): a 2D Convolutional Neural Network (CNN) as the recognition algorithm, a 1D CNN as the state machine, and a reinforcement learning agent for neural architecture search. The recognition algorithm learns location- and person-independent features from different perspectives of CSI data. The state machine learns temporal dependency information from the history of classification results. The reinforcement learning agent optimizes the neural architecture of the recognition algorithm using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM). The proposed design is evaluated in a lab environment with different WiFi device locations, antenna orientations, sitting/standing/walking locations/orientations, and multiple persons. It achieves 97% average accuracy when the testing devices and persons are not seen during training. The design is also evaluated on two public datasets, achieving accuracies of 80% and 83%. It requires very little human effort for ground-truth labeling, feature engineering, signal processing, and tuning of learning parameters and hyperparameters.
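A hedged sketch of the first two components in PyTorch, assuming CSI input of shape (batch, 1, subcarriers, time) and a history window of the last H softmax outputs; all layer sizes and the window length H are placeholder choices, since the paper's actual recognition architecture is the product of its RL-based neural architecture search.

```python
import torch
import torch.nn as nn

N_CLASSES, H = 6, 10   # placeholder class count and history length

class CSIRecognizer(nn.Module):
    """2D CNN over the (subcarrier, time) plane of a CSI window."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, N_CLASSES)

    def forward(self, csi):             # csi: (B, 1, subcarriers, time)
        return self.head(self.features(csi).flatten(1))

class StateMachine(nn.Module):
    """1D CNN over the history of classification results."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CLASSES, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(16, N_CLASSES))

    def forward(self, history):         # history: (B, N_CLASSES, H)
        return self.net(history)

csi = torch.randn(4, 1, 52, 128)        # e.g. 52 subcarriers x 128 samples
logits = CSIRecognizer()(csi)
# Fake history: repeat the current softmax H times for demonstration.
history = torch.softmax(logits, 1).unsqueeze(-1).repeat(1, 1, H)
smoothed = StateMachine()(history)      # temporally smoothed prediction
print(smoothed.shape)                   # torch.Size([4, 6])
```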


2012, Vol. 151, pp. 498-502
Author(s): Jin Xue Zhang, Hai Zhu Pan

This paper is concerned with Q-learning, a very popular reinforcement learning algorithm, for obstacle avoidance with neural networks. The guiding principle is that the focus must always be on both ecologically appropriate tasks and behaviours when designing a robot. Many robot systems have been built on behaviour-based architectures since the 1980s. In this paper, the Khepera robot is trained for the task of obstacle avoidance with the proposed Q-learning algorithm using neural networks. Experiments with real and simulated robots show that the neural-network approach enables Q-learning to handle changes in the environment.
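The following is a minimal sketch of Q-learning with a small neural network as the Q-function approximator, in the spirit described above: a Khepera-like robot reads 8 infrared proximity sensors and chooses among three actions (turn left, go forward, turn right). The two-layer network, reward shaping, and toy environment stub are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Q-learning with a tiny two-layer network as Q-function approximator.
# State: 8 IR proximity readings in [0, 1]; actions: left/forward/right.

N_SENSORS, N_ACTIONS, HIDDEN = 8, 3, 16
rng = np.random.default_rng(0)
W1 = rng.normal(0, 0.1, (HIDDEN, N_SENSORS))
W2 = rng.normal(0, 0.1, (N_ACTIONS, HIDDEN))
alpha, gamma, eps = 0.01, 0.9, 0.1    # step size, discount, exploration

def q_values(s):
    h = np.tanh(W1 @ s)               # hidden activations
    return W2 @ h, h

def step(s, a):
    """Toy environment stub: noisy sensor drift plus an action effect.
    Reward is high when all proximity readings stay low (no obstacle)."""
    s_next = np.clip(s + rng.normal(0, 0.05, N_SENSORS)
                     - 0.1 * (a - 1), 0.0, 1.0)
    return s_next, 1.0 - s_next.max()

s = rng.uniform(0, 1, N_SENSORS)
for _ in range(10000):
    q, h = q_values(s)
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(q.argmax())
    s_next, r = step(s, a)
    target = r + gamma * q_values(s_next)[0].max()   # TD target
    td_error = target - q[a]
    grad_h = W2[a] * (1 - h ** 2)     # backprop TD error to first layer
    W2[a] += alpha * td_error * h
    W1 += alpha * td_error * np.outer(grad_h, s)
    s = s_next
```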

