Game-Theoretic Intent Negotiation in a Haptic Shared Control Paradigm

Author(s):  
Daniel Saraphis ◽  
Vahid Izadi ◽  
Amirhossein Ghasemi

In this paper, we aim to develop a shared control framework wherein control authority is dynamically allocated between the human operator and the automation system. To this end, we define a shared control paradigm in which the blending mechanism uses the mutual confidence between the human and the co-robot to allocate control authority. To capture this confidence qualitatively, a simple but generic model is presented in which the human-to-robot and robot-to-human confidence levels are functions of the human's and the robot's performance. The computed confidence is then used to dynamically adjust the level of autonomy between the two agents. To validate the framework, we present case studies in which the steering control of a semi-automated vehicle is shared between the human driver and the onboard automation system. Numerical simulations demonstrate the effectiveness of the proposed shared control paradigm.
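
The abstract leaves the confidence model unspecified, so the following is only a minimal Python sketch of the idea, assuming a logistic map from relative performance to confidence; the names `confidence` and `blended_command` and the `gain` parameter are hypothetical, not the paper's notation.

```python
import numpy as np

def confidence(perf_partner: float, perf_self: float, gain: float = 2.0) -> float:
    """Illustrative logistic map from relative performance to a confidence
    level in [0, 1]: confidence in the partner grows as the partner's
    performance exceeds one's own. (Assumed form, not the paper's model.)"""
    return 1.0 / (1.0 + np.exp(-gain * (perf_partner - perf_self)))

def blended_command(u_human: float, u_robot: float,
                    perf_human: float, perf_robot: float) -> float:
    """Allocate control authority from the human-to-robot confidence:
    alpha -> 1 hands authority to the automation, alpha -> 0 to the human."""
    alpha = confidence(perf_robot, perf_human)  # human's confidence in the robot
    return alpha * u_robot + (1.0 - alpha) * u_human
```

Because alpha is recomputed as the performance measures evolve, authority shifts continuously rather than switching discretely, which is the dynamic allocation the abstract describes.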


Author(s):  
Amir H. Ghasemi

Haptic shared control is expected to achieve smooth collaboration between humans and automated systems because haptics facilitate mutual communication. This paper addresses the interaction between the human driver and the automation system in a haptic shared control framework using a non-cooperative model predictive game approach. In particular, we focus on a scenario in which both the human and the automation system detect an obstacle but select different paths for avoiding it. For this scenario, the open-loop Nash steering control solution is derived, and the influence of the human driver's impedance and path-following weights on the vehicle trajectory is investigated. It is shown that by modulating the impedance and the path-following weight, control authority can be shifted between the human driver and the automation system.
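
The paper derives the open-loop Nash solution of a model predictive steering game; as a far simpler illustration of how such weights shift authority, the sketch below solves a single-stage quadratic two-player game in closed form. Every symbol here (`q_*` path-following weights, `w_*` effort/impedance-like penalties, `b_*` input gains, `r_*` preferred paths) is an assumption of this toy model, not the paper's vehicle dynamics.

```python
import numpy as np

def nash_steering(r_h, r_a, q_h=1.0, q_a=1.0, w_h=0.1, w_a=0.1,
                  b_h=1.0, b_a=1.0):
    """Nash inputs for a one-step quadratic steering game.

    Agent i (h = human, a = automation) minimizes
        J_i = q_i * (y - r_i)**2 + w_i * u_i**2,
    where y = b_h*u_h + b_a*u_a is the resulting lateral motion. Setting
    dJ_i/du_i = 0 for both agents yields a linear system whose solution
    is the Nash equilibrium.
    """
    A = np.array([[q_h * b_h**2 + w_h, q_h * b_h * b_a],
                  [q_a * b_a * b_h,    q_a * b_a**2 + w_a]])
    rhs = np.array([q_h * b_h * r_h, q_a * b_a * r_a])
    return np.linalg.solve(A, rhs)

# Raising the automation's path-following weight pulls the motion toward
# its preferred path r_a, shifting control authority away from the human:
u_h, u_a = nash_steering(r_h=-1.0, r_a=1.0, q_a=5.0)
print("lateral motion:", u_h + u_a)  # lands much closer to r_a than to r_h
```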


2020 ◽  
Vol 39 (9) ◽  
pp. 1138-1154
Author(s):  
Kathleen Fitzsimons ◽  
Aleksandra Kalinowska ◽  
Julius P Dewald ◽  
Todd D Murphey

Although robotic platforms can provide both consistent practice and objective assessments of users over the course of their training, there are relatively few instances where physical human–robot interaction has been significantly more effective than unassisted practice or human-mediated training. This article describes a hybrid shared control robot, which enhances task learning through kinesthetic feedback. The assistance evaluates user actions against a task-specific criterion and selectively accepts or rejects them at each time instant. Through two human subject studies (total [Formula: see text]), we show that this hybrid approach of switching between full transparency and full rejection of user inputs leads to increased skill acquisition and short-term retention compared with unassisted practice. Moreover, we show that the shared control paradigm exhibits features previously shown to promote successful training. It avoids user passivity by only rejecting user actions and allowing failure at the task. It improves performance during assistance, providing meaningful task-specific feedback. It is sensitive to the initial skill of the user and behaves as an “assist-as-needed” control scheme, adapting its engagement in real time based on the performance and needs of the user. Unlike other successful algorithms, it does not require explicit modulation of the level of impedance or error amplification during training, and it is permissive to a range of strategies because of its evaluation criterion. We demonstrate that the proposed hybrid shared control paradigm with a task-based minimal intervention criterion significantly enhances task-specific training.
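
The article's evaluation criterion is task-specific and only summarized above, so the sketch below shows just the accept/reject structure under an assumed stand-in criterion (a one-step "does this action reduce task error" test on a single-integrator model); `accepts` and `gate` are hypothetical names, not the authors' implementation.

```python
import numpy as np

def accepts(x: np.ndarray, u: np.ndarray, target: np.ndarray,
            dt: float = 0.05) -> bool:
    """Assumed stand-in for the task-specific evaluation criterion:
    accept u if a one-step single-integrator prediction moves the state
    no farther from the target."""
    return np.linalg.norm(x + dt * u - target) <= np.linalg.norm(x - target)

def gate(u_user: np.ndarray, x: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Switch between full transparency (pass the user's input through)
    and full rejection (zero input). The filter never injects corrective
    motion of its own, so failure at the task remains possible."""
    return u_user if accepts(x, u_user, target) else np.zeros_like(u_user)
```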


Author(s):  
Mark Zolotas ◽  
Yiannis Demiris

Robots equipped with the ability to infer human intent have many applications in assistive robotics. In these applications, robots rely on accurate models of human intent to administer appropriate assistance. However, the effectiveness of this assistance also depends heavily on whether the human can form an accurate mental model of the robot's behaviour. The research problem is therefore to establish a transparent interaction, such that both the robot and the human understand each other's underlying "intent". We situate this problem in our Explainable Shared Control paradigm and present ongoing efforts to achieve transparency in human-robot collaboration.


2017 ◽  
pp. 120-130
Author(s):  
A. Lyasko

Informal financial operations exist in the shadow of official regulation and cannot be protected by formal legal instruments, which raises concerns about the enforcement of obligations undertaken by their participants. This paper analyzes two alternative types of auxiliary institutions that can coordinate the expectations of members of informal value transfer systems: attitudes of trust and norms of social control. It offers some preliminary approaches to building a game-theoretic model of partner interaction in an informal value transfer system and outlines prospects for further study in this area of institutional economics.


2018 ◽  
pp. 114-131
Author(s):  
O. Yu. Bondarenko

This article explores theoretical and experimental approaches to modeling social interactions. Communication and the exchange of information with other people affect an individual's behavior in numerous areas. Generally, such influence is exerted by leaders, that is, outstanding individuals with a higher social status or expert knowledge. Social interactions are analyzed in models of social learning, game-theoretic models, conformity models, etc. However, formal models of asymmetric interactions are lacking. Such models could help elicit the qualities that characterize higher social status and its perception by other individuals, detect the presence of leader influence, and analyze its mechanism.

