Safe reinforcement learning for real-time automatic control in a smart energy-hub

2022 ◽  
Vol 309 ◽  
pp. 118403
Author(s):  
Dawei Qiu ◽  
Zihang Dong ◽  
Xi Zhang ◽  
Yi Wang ◽  
Goran Strbac
Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3864
Author(s):  
Tarek Ghoul ◽  
Tarek Sayed

Speed advisories are used on highways to inform vehicles of upcoming changes in traffic conditions and to apply a variable speed limit that reduces traffic conflicts and delays. This study applies a similar concept to signalized intersections, using connected vehicles to provide real-time dynamic speed advisories that guide vehicles towards an optimum speed. Real-time safety evaluation models for signalized intersections that depend on dynamic traffic parameters, such as traffic volume and shock wave characteristics, were used for this purpose. The proposed algorithm combines a rule-based approach with a Deep Deterministic Policy Gradient (DDPG) reinforcement learning technique to assign ideal speeds to connected vehicles at intersections and improve safety. The system was tested on two intersections using real-world data and yielded an average reduction in traffic conflicts ranging from 9% to 23%. Further analysis showed that the algorithm yields tangible results even at lower market penetration rates (MPR). The algorithm was tested on the same intersection under different traffic volume conditions as well as on another intersection with different physical constraints and characteristics. The proposed algorithm provides a low-cost, computationally lightweight approach that optimizes for safety by reducing rear-end traffic conflicts.
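The combination the abstract describes, a rule-based safety layer wrapped around a learned continuous policy, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names, the kinematic rule, the speed limits, and the stand-in linear actor are all assumptions for demonstration.

```python
import math

def rule_based_bounds(signal_state, time_to_change, dist_to_stopline, queue_len_m):
    """Return (lo, hi) advisory-speed bounds in m/s from simple kinematic rules.

    Assumed rule: on red, cap the speed so the vehicle reaches the back of the
    queue roughly when the signal changes, avoiding a hard stop.
    """
    hi = 16.7  # ~60 km/h urban limit (assumed, not from the paper)
    lo = 2.0   # assumed minimum advisory speed
    if signal_state == "red":
        target = (dist_to_stopline - queue_len_m) / max(time_to_change, 1e-3)
        hi = min(hi, max(lo, target))
    return lo, hi

def ddpg_actor(state, weights):
    """Stand-in for a trained DDPG actor: linear features squashed to [0, 1].

    A real implementation would be a neural network trained with the DDPG
    actor-critic updates; this placeholder only shows the interface.
    """
    z = sum(w * s for w, s in zip(weights, state))
    return 1.0 / (1.0 + math.exp(-z))

def advise_speed(state, weights, signal_state, time_to_change, dist, queue_len):
    """Map the actor's action in [0, 1] into the rule-based safe speed band."""
    lo, hi = rule_based_bounds(signal_state, time_to_change, dist, queue_len)
    a = ddpg_actor(state, weights)
    return lo + a * (hi - lo)

v = advise_speed([0.4, -0.2, 0.1], [1.0, 0.5, -0.3],
                 "red", time_to_change=10.0, dist=120.0, queue_len=20.0)
```

The rule-based layer guarantees the advised speed stays within safe kinematic bounds regardless of what the learned policy outputs, which is one plausible reading of how the hybrid design keeps the approach low-cost and safe.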


2020 ◽  
Vol 53 (2) ◽  
pp. 15602-15607
Author(s):  
Jeevan Raajan ◽  
P V Srihari ◽  
Jayadev P Satya ◽  
B Bhikkaji ◽  
Ramkrishna Pasumarthy

Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2534
Author(s):  
Oualid Doukhi ◽  
Deok-Jin Lee

Autonomous navigation and collision avoidance missions represent a significant challenge for robotic systems, as they generally operate in dynamic environments that require a high level of autonomy and flexible decision-making capabilities. This challenge is more pronounced in micro aerial vehicles (MAVs) due to their limited size and computational power. This paper presents a novel approach for enabling a micro aerial vehicle equipped with a laser range finder to autonomously navigate among obstacles and reach a user-specified goal location in a GPS-denied environment, without the need for mapping or path planning. The proposed system uses an actor–critic-based reinforcement learning technique to train the aerial robot in a Gazebo simulator to perform a point-goal navigation task by directly mapping the MAV's noisy state and laser scan measurements to continuous motion control. The obtained policy can perform collision-free flight in the real world despite being trained entirely in a 3D simulator. Intensive simulations and real-time experiments were conducted and compared with a nonlinear model predictive control technique to show the generalization capabilities to new unseen environments and robustness against localization noise. The obtained results demonstrate the system's effectiveness in flying safely and reaching the desired points by commanding smooth forward linear velocities and heading rates.
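The mapping the abstract describes, from laser ranges plus goal-relative state to a continuous (forward velocity, heading rate) command, can be sketched as below. This is a hedged illustration, not the authors' implementation: the feature normalization, the tiny linear "actor", the number of laser beams, and the command limits are all assumptions.

```python
import math
import random

def actor(laser_ranges, goal_dist, goal_bearing, w):
    """Toy one-layer 'actor': normalized features -> (forward_v, heading_rate).

    A trained actor-critic policy would use a neural network here; this
    placeholder only demonstrates the input/output interface.
    """
    # Normalize: clip ranges to 5 m (assumed sensor horizon), scale goal terms.
    feats = [min(r, 5.0) / 5.0 for r in laser_ranges]
    feats += [goal_dist / 10.0, goal_bearing / math.pi]
    v_raw = sum(a * b for a, b in zip(w["v"], feats))
    yaw_raw = sum(a * b for a, b in zip(w["yaw"], feats))
    # Squash to assumed MAV command limits: v in [0, 1.5] m/s,
    # heading rate in [-1, 1] rad/s.
    v = 1.5 / (1.0 + math.exp(-v_raw))
    yaw = math.tanh(yaw_raw)
    return v, yaw

random.seed(0)
n_beams = 12  # assumed laser scan resolution
w = {"v": [random.uniform(-0.1, 0.1) for _ in range(n_beams + 2)],
     "yaw": [random.uniform(-0.1, 0.1) for _ in range(n_beams + 2)]}
v, yaw = actor([3.0] * n_beams, goal_dist=4.0, goal_bearing=0.5, w=w)
```

Squashing the raw outputs through a sigmoid and tanh keeps the commands inside the actuator limits by construction, which is a standard design choice for continuous-action policies of this kind.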


NeuroImage ◽  
2014 ◽  
Vol 88 ◽  
pp. 113-124 ◽  
Author(s):  
Emma J. Lawrence ◽  
Li Su ◽  
Gareth J. Barker ◽  
Nick Medford ◽  
Jeffrey Dalton ◽  
...  
