Routing of Electric Vehicles With Intermediary Charging Stations: A Reinforcement Learning Approach

2021 ◽  
Vol 4 ◽  
Author(s):  
Marina Dorokhova ◽  
Christophe Ballif ◽  
Nicolas Wyrsch

In the past few years, the importance of electric mobility has increased in response to growing concerns about climate change. However, limited cruising range and sparse charging infrastructure could restrain a massive deployment of electric vehicles (EVs). To mitigate the problem, the need for optimal route-planning algorithms emerged. In this paper, we propose a mathematical formulation of the EV-specific routing problem in a graph-theoretical context, which incorporates the ability of EVs to recuperate energy. Furthermore, we consider the possibility of recharging en route at intermediary charging stations. As a possible solution method, we present an off-policy, model-free reinforcement learning approach that aims to generate energy-feasible paths for an EV from source to target. The algorithm was implemented and tested on a case study of a road network in Switzerland. The training procedure has low computing and memory demands and is suitable for online applications. The results demonstrate the algorithm's capability to make recharging decisions and produce the desired energy-feasible paths.
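The recharging-aware routing idea in this abstract can be illustrated with a toy example: tabular Q-learning over states of the form (node, remaining energy), with a "charge" action available at station nodes and a large penalty for depleting the battery. The graph, energy costs (including a negative cost standing in for recuperation), rewards, and hyperparameters below are illustrative assumptions, not the paper's actual model or data.

```python
import random

random.seed(0)

# Directed road graph: node -> [(neighbor, energy cost)]; a negative cost
# models energy recuperation (e.g. a downhill segment). Illustrative values.
EDGES = {"S": [("A", 4), ("B", 2)],
         "A": [("T", 5)],
         "B": [("C", -1)],
         "C": [("T", 6)]}
CAPACITY = 8                              # battery capacity, energy units
ALPHA, GAMMA, EPS = 0.5, 0.95, 0.2

def actions(node):
    acts = list(EDGES.get(node, []))
    if node == "C":                       # node C hosts a charging station
        acts.append(("charge", 0))
    return acts

Q = {}

def best_action(state):
    return max(actions(state[0]), key=lambda a: Q.get((state, a), 0.0))

def step(state, a):
    node, energy = state
    if a[0] == "charge":
        return (node, CAPACITY), -2.0     # small time penalty for charging
    return (a[0], energy - a[1]), -1.0    # unit cost per traversed edge

for _ in range(5000):
    state = ("S", 6)                      # start at S with a partial charge
    done = False
    while not done:
        a = random.choice(actions(state[0])) if random.random() < EPS else best_action(state)
        nxt, reward = step(state, a)
        if nxt[1] < 0:                    # battery depleted: infeasible path
            reward, done = -100.0, True
        elif nxt[0] == "T":
            done = True
        target = reward if done else reward + GAMMA * max(
            Q.get((nxt, b), 0.0) for b in actions(nxt[0]))
        old = Q.get((state, a), 0.0)
        Q[(state, a)] = old + ALPHA * (target - old)
        state = nxt

# Greedy rollout of the learned policy
state, path = ("S", 6), ["S"]
while state[0] != "T":
    a = best_action(state)
    state, _ = step(state, a)
    path.append(a[0])
print(path)  # expected: ['S', 'B', 'C', 'charge', 'T']
```

With these numbers the direct route via A is energy-infeasible, so the learned greedy policy detours via B and C and recharges at C before reaching the target, mirroring the "energy-feasible path with an intermediary charging stop" behaviour the abstract describes.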

Energies ◽  
2021 ◽  
Vol 14 (20) ◽  
pp. 6651 ◽  
Author(s):  
Remigiusz Iwańkowicz

This paper addresses the problem of route planning for a fleet of electric vehicles departing from a depot and supplying customers with certain goods. The paper aims to present a permutation-based method of vehicle-route coding adapted to the specificity of electric drives. The developed method, integrated with an evolutionary algorithm, allows for the rapid generation of routes for multiple vehicles, taking into account the necessity of recharging at available charging stations. The minimization of the total distance travelled by all vehicles was taken as the criterion. Testing indicated satisfactory computation speed. A real region with four charging stations and 33 customers was analysed. Different demand scenarios were analysed, and factors affecting the results of the proposed calculation method were identified. The limitations of the method, caused mainly by simplifying assumptions, were pointed out. Future research and method development are planned to incorporate the passage of time and to extend the set of factors influencing the energy consumption of a moving vehicle.
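The permutation-based coding can be shown in miniature: a chromosome is a permutation of customer indices, a greedy decoder turns it into a battery-feasible tour with charging detours, and a simple (1+1) evolutionary loop with swap mutation minimizes total distance. The coordinates, the single-vehicle decoder, the battery-reserve heuristic, and the mutation scheme are all simplifying assumptions for illustration, not the paper's exact method.

```python
import math
import random

random.seed(1)

DEPOT = (0.0, 0.0)
CUSTOMERS = [(2, 1), (4, 3), (1, 4), (5, 0), (3, 5)]
STATIONS = [(2.5, 2.5)]
RANGE = 9.0                     # max distance on a full battery (illustrative)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def decode(perm):
    """Greedily decode a customer permutation into a tour; return distance.

    Keeps a reserve sufficient to reach the nearest station or the depot;
    a full implementation would also verify the final return leg.
    """
    total, pos, battery = 0.0, DEPOT, RANGE
    for i in perm:
        c = CUSTOMERS[i]
        reserve = min(dist(c, p) for p in STATIONS + [DEPOT])
        if battery < dist(pos, c) + reserve:
            st = min(STATIONS, key=lambda p: dist(pos, p))
            total += dist(pos, st)      # detour to recharge
            pos, battery = st, RANGE
        total += dist(pos, c)
        battery -= dist(pos, c)
        pos = c
    return total + dist(pos, DEPOT)     # return to depot

def mutate(perm):
    p = list(perm)
    i, j = random.sample(range(len(p)), 2)
    p[i], p[j] = p[j], p[i]             # swap two customers
    return p

# (1+1) evolutionary loop minimising total route distance
best = list(range(len(CUSTOMERS)))
best_cost = decode(best)
for _ in range(2000):
    cand = mutate(best)
    cost = decode(cand)
    if cost < best_cost:
        best, best_cost = cand, cost
print(best, round(best_cost, 2))
```

The decode-then-evaluate structure is what makes the permutation coding attractive: any permutation is a valid chromosome, and battery feasibility is handled entirely inside the decoder rather than by constraining the search operators.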


Author(s):  
Francesco M. Solinas ◽  
Andrea Bellagarda ◽  
Enrico Macii ◽  
Edoardo Patti ◽  
Lorenzo Bottaccioli

2021 ◽  
Author(s):  
Andre Menezes ◽  
Pedro Vicente ◽  
Alexandre Bernardino ◽  
Rodrigo Ventura

Sensors ◽  
2020 ◽  
Vol 20 (16) ◽  
pp. 4468 ◽  
Author(s):  
Ao Xi ◽  
Chao Chen

In this work, we introduced a novel hybrid reinforcement learning scheme to balance a biped robot (NAO) on an oscillating platform, where the rotation of the platform is treated as an external disturbance to the robot. The platform had two rotational degrees of freedom: pitch and roll. The state space comprised the position of the center of pressure and the joint angles and joint velocities of the two legs; the action space consisted of the joint angles of the ankles, knees, and hips. By incorporating inverse kinematics, the dimension of the action space was significantly reduced. A model-based system estimator was then employed during the offline training procedure to estimate the dynamics model of the system using novel hierarchical Gaussian processes and to provide initial control inputs, after which the reduced action space of each joint was obtained by minimizing the cost of reaching the desired stable state. Finally, a model-free optimizer based on DQN(λ) was introduced to fine-tune the initial control inputs, yielding optimal control inputs for each joint at any state. The proposed scheme not only avoided the distribution-mismatch problem but also improved sample efficiency. Simulation results showed that the proposed hybrid reinforcement learning mechanism enabled the NAO robot to balance on an oscillating platform with different frequencies and magnitudes. Both control performance and robustness were maintained throughout the experiments.
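The eligibility-trace idea behind the DQN(λ) fine-tuning stage can be shown in toy form: tabular Watkins's Q(λ) learning to counteract a randomly disturbed, discretized tilt angle. The state/action discretization, disturbance model, rewards, and hyperparameters are invented for illustration and bear no relation to the NAO setup or the paper's network-based learner.

```python
import random

random.seed(0)

# Discretized platform tilt (state) and corrective tilt commands (actions).
ANGLES = range(-2, 3)
ACTIONS = (-1, 1)
ALPHA, GAMMA, LAM, EPS = 0.3, 0.9, 0.8, 0.1

Q = {(s, a): 0.0 for s in ANGLES for a in ACTIONS}

for _ in range(2000):
    s = 0
    E = {k: 0.0 for k in Q}               # eligibility traces
    for _ in range(30):
        greedy = max(ACTIONS, key=lambda a: Q[(s, a)])
        a = random.choice(ACTIONS) if random.random() < EPS else greedy
        s2 = s + a + random.choice((-1, 0, 1))   # random platform disturbance
        done = abs(s2) > 2                # robot fell over
        r = -10.0 if done else (1.0 if s2 == 0 else -abs(s2))
        target = r if done else r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        delta = target - Q[(s, a)]
        E[(s, a)] += 1.0
        for k in Q:
            Q[k] += ALPHA * delta * E[k]
            # Watkins's Q(lambda): decay traces; cut them after exploration
            E[k] = E[k] * GAMMA * LAM if a == greedy else 0.0
        if done:
            break
        s = s2
```

After training, the learned values prefer the action that pushes the tilt back toward zero from either side, which is the trace-assisted credit assignment that a λ-return adds on top of plain one-step Q-learning.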


2020 ◽  
Vol 10 (19) ◽  
pp. 6923 ◽  
Author(s):  
Cristian C. Beltran-Hernandez ◽  
Damien Petit ◽  
Ixchel G. Ramirez-Alpizar ◽  
Kensuke Harada

Industrial robot manipulators play a significant role in modern manufacturing. Although peg-in-hole assembly is a common industrial task that has been extensively researched, safely solving complex, high-precision assembly in an unstructured environment remains an open problem. Reinforcement learning (RL) methods have proven successful at autonomously solving manipulation tasks. However, RL is still not widely adopted in real robotic systems because working with real hardware entails additional challenges, especially when using position-controlled manipulators. The main contribution of this work is a learning-based method for solving peg-in-hole tasks under hole-position uncertainty. We propose the use of an off-policy, model-free reinforcement learning method, and we accelerated training by using several transfer-learning techniques (sim2real) and domain randomization. Our proposed learning framework for position-controlled robots was extensively evaluated on contact-rich insertion tasks in a variety of environments.
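Domain randomization, one of the sim2real techniques the abstract mentions, can be sketched as follows: every training episode samples a fresh simulation configuration, including a randomized hole position, and a policy parameter (here a simple search-motion radius, standing in for a learned policy) is chosen so that insertion succeeds across all sampled configurations. All parameter names, ranges, and the success model are illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

NOMINAL_HOLE = (0.40, 0.10)               # nominal hole pose in metres

def randomize_episode():
    """Sample one randomized simulation configuration for an episode."""
    return {
        # hole-position uncertainty: uniform offset around the nominal pose
        "hole_xy": tuple(c + random.uniform(-0.005, 0.005) for c in NOMINAL_HOLE),
        # other randomized parameters (placeholders for a physics simulator)
        "friction": random.uniform(0.4, 1.0),
        "stiffness": random.uniform(200.0, 600.0),   # contact stiffness, N/m
    }

def insertion_succeeds(search_radius, cfg):
    # Toy success model: the search motion covers the true hole offset.
    dx = cfg["hole_xy"][0] - NOMINAL_HOLE[0]
    dy = cfg["hole_xy"][1] - NOMINAL_HOLE[1]
    return (dx * dx + dy * dy) ** 0.5 <= search_radius

# Pick the smallest search radius that succeeds on every randomized episode,
# i.e. a policy parameter robust to the hole-position uncertainty.
episodes = [randomize_episode() for _ in range(500)]
for radius in (0.002, 0.004, 0.006, 0.008):
    if all(insertion_succeeds(radius, e) for e in episodes):
        break
print(radius)
```

The point of randomizing per episode is that a policy which works across the whole sampled distribution is more likely to transfer to the real hole position, which is only known up to that uncertainty.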


Author(s):  
Wenxing Liu ◽  
Hanlin Niu ◽  
Muhammad Nasiruddin Mahyuddin ◽  
Guido Herrmann ◽  
Joaquin Carrasco
