Improvement of Vehicle Stability Using Reinforcement Learning

2018 ◽  
Author(s):  
Janaína R. Amaral ◽  
Harald Göllinger ◽  
Thiago A. Fiorentin

This paper presents a preliminary study on using reinforcement learning to control the torque vectoring of a small rear-wheel-driven electric race car in order to improve vehicle handling and stability. The reinforcement learning algorithm used is Neural Fitted Q Iteration, and experiences are sampled from simulations of the vehicle behavior in the software CarMaker. The cost function is based on the position of the states in the phase plane of sideslip angle and sideslip angular velocity. The resulting controller improves vehicle handling and stability with a significant reduction in vehicle sideslip angle.
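The phase-plane cost described above can be sketched as follows. This is an illustrative reconstruction, not the published cost: the stability-boundary coefficients `b1`, `b2` and the threshold `beta_max` are assumed values, and the shaping term is a guess at a plausible form.

```python
def phase_plane_cost(beta, beta_dot, b1=0.8, b2=0.1, beta_max=0.17):
    """Immediate cost from the sideslip phase plane.

    States inside the assumed stable region |b1*beta + b2*beta_dot| < beta_max
    incur a small cost that grows with sideslip magnitude; states outside
    the region receive a large flat penalty, so Neural Fitted Q Iteration
    learns to drive the vehicle back toward the origin of the phase plane.
    beta is the sideslip angle [rad], beta_dot its rate [rad/s].
    """
    if abs(b1 * beta + b2 * beta_dot) < beta_max:
        return (beta / beta_max) ** 2   # mild shaping cost inside the region
    return 1.0                          # flat penalty outside the stable region
```

The origin of the phase plane is the cheapest state, so minimizing the expected discounted cost reduces sideslip, matching the behavior reported in the abstract.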

Author(s):  
Krisada Kritayakirana ◽  
J. Christian Gerdes

This paper describes the algorithms used for controlling an autonomous vehicle that operates at the limits of tire adhesion. The controller is designed to imitate a race-car driver by using both feedforward and feedback to command the steering, throttle, and brakes of the vehicle. The feedforward steering is based on the vehicle handling diagram, while lane-keeping steering feedback is added to ensure vehicle stability and reduce tracking errors caused by disturbances or modeling errors. The feedforward speed is estimated from the available friction, while proportional speed feedback is introduced to mimic a race-car driver modulating the speed to trim the vehicle orientation. Two different speed feedback designs, based on lookahead error and heading error, are compared. The experiments demonstrate the superiority of heading-error feedback, which enables the vehicle to operate at its limits while maintaining minimal lateral and heading errors from the desired trajectory.
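The feedforward-plus-feedback steering structure can be sketched as below. All gains and parameters here (`understeer_gradient`, `wheelbase`, `k_lk`, the lookahead distance `x_la`) are illustrative assumptions, not the published values.

```python
import math

def steering_command(curvature, lateral_error, heading_error,
                     understeer_gradient=0.0023, wheelbase=2.5,
                     speed=20.0, k_lk=0.3, x_la=15.0):
    """Steering angle [rad] from feedforward plus lane-keeping feedback.

    Feedforward: kinematic steer L*kappa plus an understeer correction
    K_ug * a_y taken from the handling diagram, with a_y = v^2 * kappa.
    Feedback: a lookahead error combining lateral offset and heading
    error, e_la = e + x_la * sin(dpsi), scaled by the gain k_lk.
    """
    a_y = speed ** 2 * curvature                           # lateral acceleration
    delta_ff = wheelbase * curvature + understeer_gradient * a_y
    e_la = lateral_error + x_la * math.sin(heading_error)  # lookahead error
    return delta_ff - k_lk * e_la
```

On a straight path with zero errors the command is zero; on a curve the feedforward term carries most of the steering, and feedback only trims disturbances, which is the division of labor the abstract describes.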


2021 ◽  
Author(s):  
Peter Wurman ◽  
Samuel Barrett ◽  
Kenta Kawamoto ◽  
James MacGlashan ◽  
Kaushik Subramanian ◽  
...  

Many potential applications of artificial intelligence involve making real-time decisions in physical systems. Automobile racing represents an extreme case of real-time decision making in close proximity to other highly skilled drivers while near the limits of vehicular control. Racing simulations, such as the PlayStation game Gran Turismo, faithfully reproduce the nonlinear control challenges of real race cars while also encapsulating the complex multi-agent interactions. We attack, and solve for the first time, the simulated racing challenge using model-free deep reinforcement learning. We introduce a novel reinforcement learning algorithm and enhance the learning process with mixed-scenario training to encourage the agent to incorporate racing tactics into an integrated control policy. In addition, we construct a reward function that enables the agent to adhere to the sport's under-specified racing etiquette rules. We demonstrate the capabilities of our agent, GT Sophy, by winning two of three races against four of the world's best Gran Turismo drivers and being competitive in the overall team score. By showing that these techniques can be successfully used to train championship-level race car drivers, we open up the possibility of their use in other complex dynamical systems and real-world applications.
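A reward function that combines progress with etiquette penalties, in the spirit of this abstract, might look like the sketch below. The penalty terms and all weights are made-up illustrations, not the published reward.

```python
def racing_reward(progress, off_course, wall_contact, caused_collision,
                  w_progress=1.0, w_off=5.0, w_wall=2.0, w_collision=10.0):
    """Shaped per-step reward for a racing agent (illustrative only).

    Rewards along-track progress and penalizes leaving the course,
    contacting walls, and collisions the agent is judged responsible
    for -- the last term is one way to encode under-specified racing
    etiquette as a learnable signal.
    """
    r = w_progress * progress          # reward for distance gained on track
    if off_course:
        r -= w_off                     # penalty for cutting or leaving the course
    if wall_contact:
        r -= w_wall                    # penalty for scraping walls
    if caused_collision:
        r -= w_collision               # etiquette: at-fault contact is costly
    return r
```

Because the collision penalty applies only to at-fault contact, the agent can still race wheel-to-wheel; tuning these weights is exactly the kind of balance between speed and sportsmanship the abstract alludes to.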


Electronics ◽  
2018 ◽  
Vol 7 (12) ◽  
pp. 394 ◽  
Author(s):  
Michele Vignati ◽  
Edoardo Sabbioni ◽  
Federico Cheli

When dealing with electric vehicles, different powertrain layouts can be exploited. Among them, the most interesting for vehicle lateral dynamics is the layout with independent electric motors: two or four of them. This allows torque-vectoring control strategies to be applied to increase vehicle lateral performance and stability. In this paper, a novel control strategy based on torque vectoring is used to design a drifting controller that helps the driver control the vehicle in such a condition. Drift is a particular cornering condition in which high values of sideslip angle are obtained and maintained during the turn. The controller is applied to a rear-wheel-drive race car prototype with two independent electric motors on the rear axle. The controller relies only on lateral acceleration, yaw rate, and vehicle speed measurements. This makes it independent of state estimators, which can affect its performance and robustness.
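An observer-free torque-vectoring law of the kind described, using only measured lateral acceleration, yaw rate, and speed, can be sketched as follows. The sideslip-rate relation beta_dot ≈ a_y/v − r holds for small angles; the proportional gain and torque limit are assumed values, and the published controller is certainly more elaborate.

```python
def drift_torque_vectoring(a_y, yaw_rate, speed, beta_dot_ref=0.0,
                           k_p=800.0, t_max=500.0):
    """Differential torque between the rear wheels for drift assistance.

    beta_dot ~= a_y / v - r needs no state observer: it uses only the
    three measured signals named in the abstract. The controller drives
    the sideslip rate toward beta_dot_ref (zero to hold a steady drift)
    and saturates at the actuator limit t_max.
    Returns (T_left, T_right) torque offsets in Nm.
    """
    beta_dot = a_y / speed - yaw_rate             # observer-free sideslip rate
    delta_t = k_p * (beta_dot_ref - beta_dot)     # proportional correction
    delta_t = max(-t_max, min(t_max, delta_t))    # actuator saturation
    return (-delta_t / 2.0, delta_t / 2.0)        # opposite offsets -> yaw moment
```

In a steady-state corner where a_y = v·r, the estimated sideslip rate is zero and no corrective torque is requested, which is the expected behavior for a stabilizing controller.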


Author(s):  
Ahmet Kirli ◽  
Chinedum E. Okwudire ◽  
A. Galip Ulsoy

There is growing interest in steer-by-wire (SBW) systems because they provide significant benefits to conventional vehicles and are indispensable to autonomous vehicles. However, an emergency backup strategy is needed to steer a vehicle to safety if its SBW actuators fail completely. Differential drive assisted steering (DDAS), which uses torque vectoring to steer a vehicle, has been proposed as a backup strategy for SBW systems. However, vehicle stability control (VSC), a required feature in most modern vehicles, also relies on torque vectoring. This paper demonstrates for the first time, through simulations, that conflicts may arise between VSC and DDAS, rendering DDAS ineffective as an SBW system backup strategy. This preliminary study motivates the need to pay attention to, and develop strategies for addressing, these conflicts. As an example, the addition of speed control to DDAS is shown as a potential strategy for mitigating these conflicts.
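The root of the conflict is that both functions act through the same differential-torque actuator. The minimal arbitration sketch below illustrates the failure mode; the override rule and torque limit are assumptions for illustration, not the paper's model.

```python
def arbitrate_differential_torque(ddas_request, vsc_request, t_max=400.0):
    """One shared actuator, two clients (illustrative arbitration only).

    A safety-first policy lets VSC override DDAS whenever VSC is active.
    When VSC's stability intervention opposes the DDAS steering request,
    the steering authority DDAS needs as an SBW backup can be cancelled
    entirely -- the conflict the paper demonstrates in simulation.
    Inputs and output are differential torques in Nm.
    """
    if vsc_request != 0.0:
        total = vsc_request                 # VSC active: stability wins
    else:
        total = ddas_request                # otherwise DDAS steers freely
    return max(-t_max, min(t_max, total))   # shared actuator limit
```

For example, if DDAS requests +300 Nm to steer toward safety while VSC simultaneously demands −300 Nm to suppress the resulting yaw, the vehicle receives no steering moment in the intended direction, illustrating why the paper argues for coordination (such as adding speed control to DDAS).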


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 471 ◽  
Author(s):  
Jai Hoon Park ◽  
Kang Hoon Lee

Designing novel robots that can cope with a specific task is a challenging problem because of the enormous design space, which involves both morphological structures and control mechanisms. To this end, we present a computational method for automating the design of modular robots. Our method employs a genetic algorithm to evolve robotic structures as an outer optimization, and it applies a reinforcement learning algorithm to each candidate structure to train its behavior and evaluate its potential learning ability as an inner optimization. The size of the design space is reduced significantly by evolving only the robotic structure and optimizing behavior with a separate training algorithm, compared with evolving structure and behavior simultaneously. Mutual dependence between evolution and learning is achieved by regarding the mean cumulative reward of a candidate structure in reinforcement learning as its fitness in the genetic algorithm. Therefore, our method searches for prospective robotic structures that can potentially lead to near-optimal behaviors if trained sufficiently. We demonstrate the usefulness of our method through several effective design results that were automatically generated in the process of experimenting with an actual modular robotics kit.
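The coupling the abstract describes, a GA outer loop whose fitness is the mean cumulative reward from an inner RL run, can be sketched as below. The list encoding, mutation rule, and elitist selection are illustrative assumptions; `train_and_evaluate` stands in for the inner reinforcement-learning step.

```python
import random

def evolve_structures(train_and_evaluate, population, generations=10,
                      elite_frac=0.5, seed=0):
    """Outer genetic loop of the evolution-plus-learning scheme.

    train_and_evaluate(structure) represents training the candidate
    structure with RL and returning its mean cumulative reward, which
    the GA uses directly as fitness. Structures are encoded here as
    integer lists; mutation perturbs one gene by +/-1.
    """
    rng = random.Random(seed)

    def mutate(structure):
        child = list(structure)
        i = rng.randrange(len(child))
        child[i] += rng.choice([-1, 1])
        return child

    for _ in range(generations):
        # Inner optimization: fitness = mean cumulative reward after training.
        scored = sorted(population, key=train_and_evaluate, reverse=True)
        elite = scored[:max(1, int(elite_frac * len(scored)))]
        # Keep the elite and refill the population with their mutants.
        population = elite + [mutate(rng.choice(elite))
                              for _ in range(len(scored) - len(elite))]
    return max(population, key=train_and_evaluate)
```

Because the elite are carried over unchanged, the best fitness never degrades across generations, so the loop converges toward structures whose trained behavior scores well, which is the "mutual dependence between evolution and learning" the abstract describes.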

