Adaptive Haptic Shared Control Framework Using Markov Decision Processing

Author(s): Amir H. Ghasemi, Hossein Rastgoftar

Semi-autonomous steering promises to combine the best of human perception, planning, and manual control with the precision of automatic control. This paper presents an adaptive haptic shared control scheme that uses a Markov Decision Process (MDP) to keep human drivers in the loop while freeing their attention and avoiding common automation pitfalls. Using the MDP, algorithms are developed to support the negotiation of control authority between the human driver and the automation system.
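
The abstract does not specify the MDP's states, actions, or rewards. As a minimal illustrative sketch, the Python below sets up a hypothetical authority-negotiation MDP, with driver-engagement levels as states and authority shifts as actions, and solves it with standard value iteration; every number in it is an assumption, not the authors' model.

```python
import numpy as np

# Hypothetical authority-negotiation MDP (illustrative only; the paper's
# actual states, actions, and rewards are not given in the abstract).
# States: discretized driver-engagement levels, 0 (distracted) .. 4 (fully engaged).
# Actions: 0 = shift authority toward automation, 1 = hold, 2 = shift toward driver.
n_states, n_actions, gamma = 5, 3, 0.95

rng = np.random.default_rng(0)
# P[a, s, s']: assumed transition probabilities for how engagement evolves.
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
# R[s, a]: assumed reward, favoring driver authority when engagement is high
# and automation authority when it is low.
engagement = np.arange(n_states) / (n_states - 1)
R = np.stack([1.0 - engagement,        # automation authority pays off when disengaged
              0.5 * np.ones(n_states),  # holding is neutral
              engagement], axis=1)      # driver authority pays off when engaged

# Standard value iteration.
V = np.zeros(n_states)
for _ in range(500):
    Q = R + gamma * np.einsum("asn,n->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

policy = Q.argmax(axis=1)  # authority-shift decision per engagement level
print("authority policy by engagement level:", policy)
```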

Author(s): Amir H. Ghasemi

Haptic shared control is expected to achieve smooth collaboration between humans and automated systems, because haptics facilitate mutual communication. This paper addresses the interaction between the human driver and the automation system in a haptic shared control framework using a non-cooperative model predictive game approach. In particular, we focus on a scenario in which both the human and the automation system detect an obstacle but select different paths for avoiding it. For this scenario, the open-loop Nash steering control solution is derived, and the influence of the human driver's impedance and path-following weights on the vehicle trajectory is investigated. It is shown that by modulating the impedance and the path-following weight, control authority can be shifted between the human driver and the automation system.
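
The derivation of the open-loop Nash solution is not in the abstract. The sketch below illustrates the idea on a toy two-player linear-quadratic steering game solved by iterated best response, one common way to approximate an open-loop Nash equilibrium (convergence is not guaranteed in general, though it holds for this symmetric toy setup). The vehicle model, horizon, and cost weights are invented, but the sketch reproduces the qualitative claim: raising the human's path-following weight shifts the trajectory toward the human's path.

```python
import numpy as np

# Toy two-player open-loop Nash steering game (illustrative assumptions only:
# the paper's vehicle model, horizon, and cost weights are not in the abstract).
# State x = [lateral position, heading]; each player applies a steering input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])          # both players act through the same channel
N, nx = 20, 2

def stack_dynamics(A, B, N):
    """Build x_stack = Phi @ x0 + G @ u_stack over the horizon."""
    Phi = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * nx, N))
    for k in range(N):
        for j in range(k + 1):
            G[k*nx:(k+1)*nx, j:j+1] = np.linalg.matrix_power(A, k - j) @ B
    return Phi, G

Phi, G = stack_dynamics(A, B, N)
x0 = np.zeros(2)

# Each player tracks a different avoidance path (the conflict scenario).
ref_h = np.tile([+1.0, 0.0], N)       # human swerves left
ref_a = np.tile([-1.0, 0.0], N)       # automation swerves right

def best_response(q_track, r_eff, ref, u_other):
    """LQ best response given the other player's fixed input sequence."""
    Qb = q_track * np.eye(N * nx)
    Rb = r_eff * np.eye(N)
    rhs = G.T @ Qb @ (ref - Phi @ x0 - G @ u_other)
    return np.linalg.solve(G.T @ Qb @ G + Rb, rhs)

def open_loop_nash(q_h, r_h, q_a, r_a, iters=200):
    u_h, u_a = np.zeros(N), np.zeros(N)
    for _ in range(iters):            # iterated best response
        u_h = best_response(q_h, r_h, ref_h, u_a)
        u_a = best_response(q_a, r_a, ref_a, u_h)
    return u_h, u_a

# Raising the human's path-following weight q_h (relative to the effort
# cost r_h, which plays the role of impedance) pulls the vehicle toward
# the human's chosen path.
for q_h in (0.1, 1.0, 10.0):
    u_h, u_a = open_loop_nash(q_h, 1.0, 1.0, 1.0)
    x = Phi @ x0 + G @ (u_h + u_a)
    print(f"q_h={q_h:5.1f}  final lateral position: {x[-2]:+.2f}")
```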


2018, Vol. 37 (7), pp. 717-742
Author(s): Shervin Javdani, Henny Admoni, Stefania Pellegrinelli, Siddhartha S. Srinivasa, J. Andrew Bagnell

In shared autonomy, a user and an autonomous system work together to achieve shared goals. To collaborate effectively, the autonomous system must know the user's goal. As such, most prior works follow a predict-then-act model, first predicting the user's goal with high confidence, then assisting given that goal. Unfortunately, confidently predicting the user's goal may not be possible until they have nearly achieved it, causing predict-then-act methods to provide little assistance. However, the system can often provide useful assistance even when confidence for any single goal is low (e.g., by moving toward multiple goals). In this work, we formalize this insight by modeling shared autonomy as a partially observable Markov decision process (POMDP), providing assistance that minimizes the expected cost-to-go under an unknown goal. As solving this POMDP optimally is intractable, we approximate it using hindsight optimization. We apply our framework to both shared-control teleoperation and human–robot teaming. Compared with predict-then-act methods, our method achieves goals faster, requires less user input, decreases user idling time, and results in fewer user–robot collisions.
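
As a rough illustration of the hindsight-optimization idea (not the authors' implementation), the sketch below maintains a belief over candidate goals, approximates each goal's cost-to-go as if that goal were known, and picks the action minimizing the belief-weighted cost, so assistance remains useful even while the belief is spread over several goals. The goals, observation model, and costs are invented placeholders.

```python
import numpy as np

# Toy 2-D illustration of hindsight optimization: rather than committing to
# one predicted goal, act to minimize the belief-weighted cost-to-go.
goals = np.array([[5.0, 0.0], [5.0, 4.0], [2.0, 6.0]])  # assumed candidates
belief = np.ones(len(goals)) / len(goals)
actions = np.array([[1, 0], [-1, 0], [0, 1], [0, -1], [0, 0]], dtype=float) * 0.5

def cost_to_go(pos, goal):
    # Hindsight approximation: with the goal known, the optimal
    # cost-to-go in this toy is just the remaining distance.
    return np.linalg.norm(goal - pos)

def update_belief(belief, user_input, pos, rationality=2.0):
    # Assumed observation model: user input points noisily toward the goal.
    scores = np.array([user_input @ (g - pos) / (np.linalg.norm(g - pos) + 1e-9)
                       for g in goals])
    belief = belief * np.exp(rationality * scores)
    return belief / belief.sum()

def assist(pos, belief):
    # Choose the action minimizing expected (belief-weighted) cost-to-go.
    exp_cost = [sum(b * cost_to_go(pos + a, g) for b, g in zip(belief, goals))
                for a in actions]
    return actions[int(np.argmin(exp_cost))]

# Simulate: the user heads toward goal 1; assistance helps before the
# belief has confidently singled that goal out.
pos, true_goal = np.zeros(2), goals[1]
for step in range(30):
    user_input = (true_goal - pos) / (np.linalg.norm(true_goal - pos) + 1e-9)
    belief = update_belief(belief, user_input, pos)
    pos = pos + assist(pos, belief)
    if np.linalg.norm(true_goal - pos) < 0.5:
        break
print(f"reached goal region in {step + 1} steps; belief = {belief.round(2)}")
```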


Mathematics, 2021, Vol. 9 (12), p. 1385
Author(s): Irais Mora-Ochomogo, Marco Serrato, Jaime Mora-Vargas, Raha Akhavan-Tabatabaei

Natural disasters represent a latent threat for every country in the world. Due to climate change and other factors, statistics show that they continue to be on the rise. This situation challenges communities and humanitarian organizations to be better prepared and to react faster to natural disasters. In some countries, in-kind donations represent a high percentage of the supply for relief operations, which presents additional challenges. This research proposes a Markov Decision Process (MDP) model of the operations of collection centers, where in-kind donations are received, sorted, packed, and sent to the affected areas. The decision addressed is when to send a shipment, considering the uncertainty of the donations' supply and the demand, as well as the logistics costs and the penalty for unsatisfied demand. As a result of the MDP, a Monotone Optimal Non-Decreasing Policy (MONDP) is proposed, which provides valuable insights for decision-makers in this field. Moreover, the necessary conditions to prove the existence of such a MONDP are presented.
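
The paper's exact formulation is not given in the abstract; the sketch below solves an assumed shipment-timing MDP over inventory levels with value iteration and prints the resulting policy, which under these illustrative costs comes out as a threshold ("ship once inventory is high enough"), i.e. monotone non-decreasing in the state, matching the MONDP structure described above.

```python
import numpy as np

# Illustrative shipment-timing MDP (assumed numbers; the paper's actual costs
# and donation/demand distributions are not in the abstract).
# State: accumulated donations at the collection center, 0..MAX_INV.
# Actions: 0 = hold, 1 = ship everything to the affected area.
MAX_INV, gamma = 20, 0.95
ship_cost, hold_cost, penalty, demand = 5.0, 0.2, 1.0, 8
arrivals = np.arange(5)          # assumed donations per period: 0..4, uniform
p_arr = np.full(5, 0.2)

def q_values(s, V):
    """Return (q_hold, q_ship) for inventory level s under value function V."""
    nxt_hold = np.minimum(s + arrivals, MAX_INV)
    q_hold = -hold_cost * s + gamma * p_arr @ V[nxt_hold]
    # Shipping empties the center; demand it cannot cover is penalized.
    nxt_ship = np.minimum(arrivals, MAX_INV)
    q_ship = -ship_cost - penalty * max(demand - s, 0) + gamma * p_arr @ V[nxt_ship]
    return q_hold, q_ship

V = np.zeros(MAX_INV + 1)
for _ in range(2000):            # value iteration
    V_new = np.array([max(q_values(s, V)) for s in range(MAX_INV + 1)])
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

# The optimal action switches from "hold" (0) to "ship" (1) once as the
# inventory level grows: a monotone non-decreasing (threshold) policy.
policy = [int(np.argmax(q_values(s, V))) for s in range(MAX_INV + 1)]
print("ship decision by inventory level:", policy)
```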


Author(s): Yuanchen Zeng, Dongli Song, Weihua Zhang, Bin Zhou, Mingyuan Xie, ...
