Effects of Human Personal Space on the Robot Obstacle Avoidance Behavior: A Human-in-the-loop Assessment

Author(s):  
Yuhao Chen
Trevor Smith
Nathan Hewitt
Yu Gu
Boyi Hu

To ensure both the physical and mental safety of humans during human-robot interaction (HRI), a rich body of literature has accumulated, and the notion of socially acceptable robot behaviors has arisen. Specifically, robot motion must not only be physically collision-free but must also consider and respect the social conventions developed and enforced in human social contexts. Among these social conventions, personal space, or proxemics, is one of the most commonly considered in robot behavioral design. Nevertheless, most previous research efforts assumed that robots could generate human-like motions by merely mimicking a human; rarely are a robot's behavioral algorithms assessed and verified by human participants. To fill this research gap, a Turing-like simulation test was conducted involving the interaction of two agents (each of which could be a human or a robot) in a shared space. Participants (33 in total) were asked to identify and label the category of those agents and then complete questionnaires. Results revealed that people with different attitudes and prior expectations of appropriate robot behaviors responded to the algorithm differently, and their identification accuracy varied significantly. In general, by considering personal space in the robot obstacle avoidance algorithm, robots demonstrated more human-like motion behaviors, as confirmed by the human experiments.
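The abstract does not give the algorithm's exact formulation. One common way to fold personal space into obstacle avoidance is to treat a human as both a physical obstacle and a social one, adding a proxemic penalty on top of the collision cost; the sketch below illustrates this idea with hypothetical radii, decay constants, and weights.

```python
import math

def obstacle_cost(dist, radius=0.4):
    """Collision cost for an obstacle: infinite inside its physical
    radius, decaying quickly outside (hypothetical shaping)."""
    if dist <= radius:
        return float("inf")
    return math.exp(-(dist - radius) / 0.2)

def proxemic_cost(dist, personal_space=1.2):
    """Extra cost for entering a human's personal space: a smooth
    penalty that is high within roughly 1.2 m (a common proxemics
    figure) and fades beyond it."""
    return math.exp(-((dist / personal_space) ** 2))

def cell_cost(dist_to_obstacle, dist_to_human, w_social=2.0):
    """Total planning cost for a candidate position: the human
    contributes both a collision term and a weighted social term."""
    return (obstacle_cost(dist_to_obstacle)
            + obstacle_cost(dist_to_human)
            + w_social * proxemic_cost(dist_to_human))
```

A planner minimizing `cell_cost` then keeps a larger clearance around humans than around static obstacles, which is the behavior the study's participants judged as more human-like.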

Author(s):  
Yuhao Chen
Chizhao Yang
Bowen Song
Nicholas Gonzalez
Yu Gu
...

Safety is of critical concern in human-robot collaboration. A human-in-the-loop experiment was conducted to assess the performance of a customized robot control scheme in a smart warehouse setting, where human participants performed order picking and assembly tasks and a mobile robot performed simulated pallet-moving tasks. Three modes of human-robot interaction were tested: no robot involved, mobile robot with an empty payload, and mobile robot with a full payload. Participants' pupillary response, subjective workload, and task performance (completion time and errors) were recorded and compared between modes. Preliminary results indicated that a slight decrease in human productivity (completion time increased from 191.4 to 204.1 seconds, p = 0.041) was well compensated by the relatively large gain from the robotic output. Additionally, human mental workload was negatively impacted (pupil diameter increased from 4.0 to 4.2 mm, p = 0.012) by introducing the robot into the shared workplace.


Author(s):  
Vignesh Prasad
Ruth Stock-Homburg
Jan Peters

Abstract
For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education, and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell, and congratulation. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition, we find that factors like gaze, voice, and facial expressions can affect the perception of a robotic handshake, and that internal factors like personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.


Author(s):  
Margot M. E. Neggers
Raymond H. Cuijpers
Peter A. M. Ruijten
Wijnand A. IJsselsteijn

Abstract
Autonomous mobile robots that operate in environments with people are expected to deal with human proxemics and social distances. Previous research investigated how robots can approach persons or how to implement human-aware navigation algorithms. However, experimental research on how robots can avoid a person in a comfortable way is largely missing. The aim of the current work is to experimentally determine the shape and size of the personal space of a human passed by a robot. In two studies, both a humanoid and a non-humanoid robot passed a person at different sides and distances, after which participants rated their perceived comfort. As expected, perceived comfort increased with distance. However, the shape was not circular: passing at the back of a person was more uncomfortable than passing at the front, especially in the case of the humanoid robot. These results give more insight into the shape and size of personal space in human-robot interaction. Furthermore, they can serve as input to human-aware navigation algorithms for autonomous mobile robots in which human comfort is traded off against efficiency goals.
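The study's key finding, that personal space is not circular and extends further behind a person than in front, can be encoded as a direction-dependent Gaussian discomfort model. The sketch below is a minimal illustration, not the authors' fitted model; the spread parameters `sigma_front` and `sigma_back` are hypothetical, with the larger rear value reflecting that passing behind stays uncomfortable out to larger distances.

```python
import math

def discomfort(distance, angle, sigma_front=0.8, sigma_back=1.2):
    """Predicted discomfort (in [0, 1]) when a robot passes a person
    at a given distance (m) and angle (rad, 0 = directly in front,
    pi = directly behind). Spread parameters are hypothetical."""
    # Interpolate the Gaussian spread between front and back
    t = (1 - math.cos(angle)) / 2          # 0 in front, 1 behind
    sigma = sigma_front + t * (sigma_back - sigma_front)
    return math.exp(-(distance ** 2) / (2 * sigma ** 2))
```

A human-aware planner could penalize trajectories by this term, so that for equal clearance a path crossing in front of the person is preferred over one crossing behind.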


Robotica, 2017, Vol 36 (4), pp. 463-483
Author(s):  
C. Ton
Z. Kan
S. S. Mehta

SUMMARY
This paper considers applications where a human agent navigates a semi-autonomous mobile robot in an environment with obstacles. The human input to the robot can be based on a desired navigation objective, which may not be known to the robot. Additionally, the semi-autonomous robot can be programmed to ensure obstacle avoidance as it navigates the environment. A shared control architecture can be used to appropriately fuse the human and autonomy inputs into a net control input that drives the robot. In this paper, an adaptive, near-continuous control allocation function is included in the shared controller, which continuously varies the control effort exerted by the human and the autonomy based on the position of the robot relative to obstacles. The developed control allocation function allows the human to navigate the robot freely when away from obstacles, and it causes the autonomy control input to progressively dominate as the robot approaches obstacles. A harmonic potential field-based non-linear sliding mode controller is developed to obtain the autonomy control input for obstacle avoidance. In addition, a robust feed-forward term is included in the autonomy control input to maintain stability in the presence of adverse human inputs, which can be critical in applications such as preventing collision or roll-over of smart wheelchairs due to erroneous human inputs. A Lyapunov-based stability analysis is presented to guarantee finite-time stability of the developed shared controller, i.e., the autonomy guarantees obstacle avoidance as the human navigates the robot. Experimental results are provided to validate the performance of the developed shared controller.
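The core idea of a distance-dependent control allocation can be sketched as a blend u = (1 - alpha) * u_human + alpha * u_autonomy, where alpha rises smoothly from 0 (far from obstacles, full human authority) to 1 (near obstacles, autonomy dominates). This is an illustrative stand-in for the paper's adaptive allocation function; the thresholds and the cosine blend below are hypothetical choices, not the authors'.

```python
import math

def allocation(dist, d_safe=0.5, d_free=2.0):
    """Smooth allocation weight alpha in [0, 1]: 0 when the robot is
    farther than d_free from obstacles, 1 inside d_safe. Both
    thresholds (meters) are hypothetical."""
    if dist >= d_free:
        return 0.0
    if dist <= d_safe:
        return 1.0
    # Cosine blend between thresholds gives a near-continuous transition
    s = (dist - d_safe) / (d_free - d_safe)
    return 0.5 * (1 + math.cos(math.pi * s))

def shared_control(u_human, u_autonomy, dist_to_obstacle):
    """Fuse the two command vectors into one net control input:
    u = (1 - alpha) * u_h + alpha * u_a."""
    a = allocation(dist_to_obstacle)
    return tuple((1 - a) * uh + a * ua
                 for uh, ua in zip(u_human, u_autonomy))
```

In open space the human's command passes through unchanged, while close to an obstacle the autonomy's avoidance command takes over entirely, which is what lets the autonomy guarantee obstacle avoidance regardless of the human input.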


Author(s):  
Ruyi Ge
Zhiqiang (Eric) Zheng
Xuan Tian
Li Liao

We study the human-robot interaction of financial-advising services in peer-to-peer (P2P) lending. Many crowdfunding platforms have started using robo-advisors to help lenders augment their intelligence in P2P loan investments. Collaborating with one of the leading P2P companies, we examine how investors use robo-advisors and how human adjustment of robo-advisor usage affects investment performance. Our analyses show that, somewhat surprisingly, investors who need more help from robo-advisors (that is, those who encountered more defaults in their manual investing) are less likely to adopt such services. Investors tend to adjust their usage of the service in reaction to recent robo-advisor performance. However, interestingly, these human-in-the-loop interferences often lead to inferior performance.


2011, Vol 56 (2), pp. 349-394
Author(s):  
Dennis J. Baker

In this paper, I argue that principled criminalization does not have to rely on critical objectivity. It is not necessary to demonstrate that conduct is criminalizable only if it is wrong in a transcultural and truly correct sense. I argue that such standards are impossible to identify and that a sounder basis for criminalization decisions can be found by drawing on our deep conventional understandings of wrong. I argue that Feinberg’s harm principle can be supported with conventional accounts of harm, and that such harms can be identified as objectively harmful when measured against our deep conventional understandings of harm. The distinction that critical moralists make between truly harmful conduct and conventionally objective harmful conduct is unsustainable because many conventional harms impact real victims in social contexts. The best that we can do is to scrutinize our conventional conceptualizations of harm and badness, but that scrutiny is constrained by the limits of epistemological inquiry and our capacity for rationality at any given point in time. Many acts are criminalizable because they violate social conventions that are shareable by communally situated agents.


Sensors, 2020, Vol 20 (9), pp. 2699
Author(s):  
Redhwan Algabri
Mun-Taek Choi

Human following is one of the fundamental functions in human–robot interaction for mobile robots. This paper shows a novel framework with state-machine control in which the robot tracks the target person in occlusion and illumination changes, as well as navigates with obstacle avoidance while following the target to the destination. People are detected and tracked using a deep learning algorithm, called Single Shot MultiBox Detector, and the target person is identified by extracting the color feature using the hue-saturation-value histogram. The robot follows the target safely to the destination using a simultaneous localization and mapping algorithm with the LIDAR sensor for obstacle avoidance. We performed intensive experiments on our human following approach in an indoor environment with multiple people and moderate illumination changes. Experimental results indicated that the robot followed the target well to the destination, showing the effectiveness and practicability of our proposed system in the given environment.


Author(s):  
Peter N. Squire
Raja Parasuraman

To achieve effective human-robot interaction (HRI), it is important to determine what types of supervisory control interfaces lead to optimal human-robot teaming. Research in HRI has demonstrated that operators controlling fewer robots against opponents of equal strength face greater challenges when control is restricted to automation alone. Using human-in-the-loop evaluations of delegation-type interfaces, the present study examined the challenges and outcomes of a single operator supervising (1) more or fewer robots than a simulated adversary, with either (2) a flexible or a restricted control interface. Testing was conducted with 12 paid participants using the RoboFlag simulation environment. Results from this experiment support past findings of execution-timing deficiencies related to automation brittleness, and present new findings indicating that successful teaming between a single human operator and a robotic team is affected by both the number of robots and the type of interface.

