Task-based hybrid shared control for training through forceful interaction

2020 ◽  
Vol 39 (9) ◽  
pp. 1138-1154
Author(s):  
Kathleen Fitzsimons ◽  
Aleksandra Kalinowska ◽  
Julius P Dewald ◽  
Todd D Murphey

Despite the fact that robotic platforms can provide both consistent practice and objective assessments of users over the course of their training, there are relatively few instances where physical human–robot interaction has been significantly more effective than unassisted practice or human-mediated training. This article describes a hybrid shared control robot, which enhances task learning through kinesthetic feedback. The controller assesses user actions against a task-specific evaluation criterion and selectively accepts or rejects them at each time instant. Through two human subject studies (total [Formula: see text]), we show that this hybrid approach of switching between full transparency and full rejection of user inputs leads to increased skill acquisition and short-term retention compared with unassisted practice. Moreover, we show that the shared control paradigm exhibits features previously shown to promote successful training. It avoids user passivity by only rejecting user actions and allowing failure at the task. It improves performance during assistance, providing meaningful task-specific feedback. It is sensitive to the initial skill of the user and behaves as an “assist-as-needed” control scheme, adapting its engagement in real time based on the performance and needs of the user. Unlike other successful algorithms, it does not require explicit modulation of the level of impedance or error amplification during training, and it is permissive to a range of strategies because of its evaluation criterion. We demonstrate that the proposed hybrid shared control paradigm with a task-based minimal intervention criterion significantly enhances task-specific training.
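The accept/reject mechanism described above can be illustrated with a minimal sketch. This is not the paper's task-based minimal intervention criterion itself; it only shows the switching structure, assuming a generic one-step cost-decrease test (the function names, the `dynamics`/`step_cost` interfaces, and the step size are all hypothetical):

```python
import numpy as np

def hybrid_filter(u_user, x, step_cost, dynamics, dt=0.01):
    """Accept the user's input when it does not worsen a task-specific
    cost over one predicted step; otherwise fully reject it (zero input).
    Simplified illustration of switching between full transparency and
    full rejection -- the paper's evaluation criterion is more elaborate."""
    x_next = x + dynamics(x, u_user) * dt                 # one-step prediction with user input
    x_hold = x + dynamics(x, np.zeros_like(u_user)) * dt  # one-step prediction with no input
    # full transparency if the user's action helps, full rejection otherwise
    if step_cost(x_next) <= step_cost(x_hold):
        return u_user
    return np.zeros_like(u_user)
```

Because the filter only ever rejects (it never injects corrective inputs of its own), the user remains free to fail at the task, which matches the passivity-avoidance property described in the abstract.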

2020 ◽  
Vol 2020 ◽  
pp. 1-14 ◽  
Author(s):  
Asif Khan ◽  
Jian Ping Li ◽  
Amin ul Haq ◽  
Shah Nazir ◽  
Naeem Ahmad ◽  
...  

The most common use of robots is to reduce human effort while achieving the desired output. In human–robot interaction, it is essential for both parties to predict each other's subsequent actions from their present actions so that cooperative work can be completed successfully. Considerable effort has been devoted to achieving precise cooperation between human and robot. In the case of decision making, previous studies observe that short-term and midterm forecasts leave a long time horizon over which to adjust and react. To address this problem, we propose a new vision-based interaction model. The proposed model reduces the error-amplification problem by exploiting prior inputs through their features, which are retrieved by a deep belief network (DBN) through a Boltzmann machine (BM) mechanism. Additionally, we present a mechanism to decide the possible outcome (accept or reject), which is evaluated on several datasets. Hence, the system is able to capture the relevant information using the motion of the objects, and it updates this information for verification, tracking, acquisition, and extraction of images in order to adapt to the situation. Furthermore, we propose an intelligent purifier filter (IPF) and a learning algorithm based on vision theories to make the proposed approach more robust. Experiments show higher performance of the proposed model compared with state-of-the-art methods.


Author(s):  
Daniel Saraphis ◽  
Vahid Izadi ◽  
Amirhossein Ghasemi

In this paper, we aim to develop a shared control framework wherein the control authority is dynamically allocated between the human operator and the automation system. To this end, we have defined a shared control paradigm wherein the blending mechanism uses the confidence between a human and co-robot to allocate the control authority. To capture this confidence qualitatively, a simple but generic model is presented wherein human-to-robot and robot-to-human confidence are each a function of the human's performance and the robot's performance. The computed confidence is then used to adjust the level of autonomy between the two agents dynamically. To validate our novel framework, we propose case studies in which the steering control of a semi-automated system is shared between the human and onboard automation systems. The numerical simulations demonstrate the effectiveness of the proposed shared control paradigm.
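The confidence-based blending can be sketched in a few lines. This is a hypothetical instantiation, not the authors' model: it assumes confidence in each agent is simply proportional to that agent's recent performance score, and blends the two control commands linearly by the resulting authority share:

```python
def blend_authority(u_human, u_robot, perf_human, perf_robot, eps=1e-6):
    """Allocate control authority from mutual confidence.
    Hypothetical form: each agent's confidence in the other is taken
    proportional to the other's recent performance; the paper's
    confidence model is more general."""
    # human's confidence in the robot grows with the robot's performance
    c_hr = perf_robot / (perf_human + perf_robot + eps)
    alpha = 1.0 - c_hr  # human's authority share
    return alpha * u_human + (1.0 - alpha) * u_robot
```

With equal performance scores, authority splits evenly; as one agent outperforms the other, the blended command shifts toward that agent, which is the dynamic level-of-autonomy adjustment the abstract describes.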


Author(s):  
Robert Z. Szasz ◽  
Mihai Mihaescu ◽  
Laszlo Fuchs

The acoustic field generated by flow unsteadiness in a model annular gas turbine (GT) combustion chamber is determined using a hybrid approach. In the flow solver, the semi-compressible Navier-Stokes equations are resolved using large eddy simulation (LES) as the turbulence model. The acoustic solver is based on an inhomogeneous wave equation, where the instantaneous source terms are computed from the LES data at each time instant. The flow and the acoustics in a GT combustor with co- and counter-rotating swirler burners have been considered. The results show that significant differences can be observed between the co- and counter-rotating configurations in both the flow and acoustic fields.


Sensors ◽  
2020 ◽  
Vol 20 (10) ◽  
pp. 2863
Author(s):  
Joaquin Ballesteros ◽  
Francisco Pastor ◽  
Jesús M. Gómez-de-Gabriel ◽  
Juan M. Gandarias ◽  
Alfonso J. García-Cerezo ◽  
...  

In physical Human–Robot Interaction (pHRI), forces exerted by humans need to be estimated to accommodate robot commands to human constraints, preferences, and needs. This paper presents a method for the estimation of the interaction forces between a human and a robot using a gripper with proprioceptive sensing. Specifically, we measure forces exerted by a human limb grabbed by an underactuated gripper in a frontal plane using only the gripper’s own sensors. This is achieved via a regression method, trained with experimental data from the values of the phalanx angles and actuator signals. The proposed method is intended for adaptive shared control in limb manipulation. Although adding force sensors provides better performance, the results obtained are accurate enough for this application. This approach requires no additional hardware: it relies solely on the gripper motor feedback—current, position, and torque—and joint angles. It is also computationally cheap, so processing times are low enough to allow continuous human-adapted pHRI for shared control.
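The regression step can be sketched as follows. This is a minimal illustration, not the authors' trained model: it assumes a linear map from proprioceptive readings (phalanx angles plus motor current, position, and torque) to the exerted force, fitted by least squares on synthetic data; the feature layout and weights are entirely hypothetical:

```python
import numpy as np

# Hypothetical training data: each row holds [two phalanx angles,
# motor current, motor position, motor torque]; the target is the
# force exerted by the grabbed limb. Real data would come from the
# gripper's experiments, not a random generator.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
true_w = np.array([0.8, -0.3, 1.2, 0.5, -0.7])   # unknown in practice
y = X @ true_w + 0.01 * rng.normal(size=200)     # noisy force labels

# Fit a linear least-squares regression from proprioceptive signals
# to force -- the paper's regression model need not be linear.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def estimate_force(readings):
    """Map one vector of proprioceptive readings to an estimated force."""
    return readings @ w
```

Because the estimator is a single dot product at run time, it is cheap enough for the continuous, human-adapted shared control loop the abstract targets.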


2019 ◽  
Vol 40 (1) ◽  
pp. 105-117
Author(s):  
Yanan Li ◽  
Keng Peng Tee ◽  
Rui Yan ◽  
Shuzhi Sam Ge

Purpose – This paper aims to propose a general framework of shared control for human–robot interaction.

Design/methodology/approach – Human dynamics are considered in the analysis of the coupled human–robot system. Motion intentions of both human and robot are taken into account in the control objective of the robot. Reinforcement learning is developed to achieve the control objective subject to the unknown dynamics of human and robot. The closed-loop system performance is discussed through a rigorous proof.

Findings – Simulations are conducted to demonstrate the learning capability of the proposed method and its feasibility in handling various situations.

Originality/value – Compared with existing works, the proposed framework combines motion intentions of both human and robot in a human–robot shared control system, without requiring knowledge of the human's and robot's dynamics.
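The core idea of combining both agents' motion intentions in one control objective can be shown with a toy sketch. This is only an illustration of the blending, with a fixed weight; in the paper the trade-off is handled by reinforcement learning under unknown dynamics, and the function name and interface here are hypothetical:

```python
import numpy as np

def shared_target(x_human, x_robot, lam):
    """Blend the human's and the robot's intended targets into a single
    reference for the robot's controller. lam in [0, 1] weights the
    human's intention; lam is fixed here, whereas the paper adapts the
    trade-off online via reinforcement learning."""
    return lam * np.asarray(x_human, dtype=float) \
        + (1.0 - lam) * np.asarray(x_robot, dtype=float)
```

At lam = 1 the robot defers entirely to the human's intention; at lam = 0 it follows its own, with intermediate values giving genuinely shared control.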

