Human Posture Recognition for Estimation of Human Body Condition

Author(s):  
Wei Quan
Jinseok Woo
Yuichiro Toda
Naoyuki Kubota
...  

Human posture recognition has been a popular research topic since the emergence of related fields such as human-robot interaction and simulation operation. Most existing methods are based on supervised learning and require a large amount of training data to achieve good assessment accuracy. In this study, we address this limitation by applying unsupervised learning algorithms based on a forward kinematics model of the human skeleton. We then refine the proposed method by integrating particle swarm optimization (PSO). The advantage of the proposed method is that no pre-training data is required for human posture generation and recognition. We validate the method through a series of experiments with human subjects.
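
As a rough illustration of how PSO can fit a forward-kinematics model to observed keypoints without any pre-training, consider the toy sketch below: a planar two-joint limb whose joint angles are searched so that predicted joint positions match sensor keypoints. The `forward_kinematics` model, swarm size, and coefficients are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy forward-kinematics model: a planar two-joint limb with unit-length
# links. The real method uses a full human skeleton; this stands in for it.
def forward_kinematics(angles):
    a1, a2 = angles
    elbow = np.array([np.cos(a1), np.sin(a1)])
    wrist = elbow + np.array([np.cos(a1 + a2), np.sin(a1 + a2)])
    return np.vstack([elbow, wrist])

def fitness(angles, observed):
    # Sum of squared distances between predicted and observed keypoints.
    return np.sum((forward_kinematics(angles) - observed) ** 2)

def pso(observed, n_particles=30, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = 2
    pos = np.random.uniform(-np.pi, np.pi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p, observed) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([fitness(p, observed) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

observed = forward_kinematics(np.array([0.6, -0.4]))  # synthetic "sensor" data
print(pso(observed))  # recovered joint angles, with no training data required
```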

2021
Vol 10 (3)
pp. 1-25
Author(s):  
Ajung Moon
Maneezhay Hashmi
H. F. Machiel Van Der Loos
Elizabeth A. Croft
Aude Billard

When it is uncertain who should get access to a communal resource first, people often negotiate via nonverbal communication to resolve the conflict. What should a robot be programmed to do when such conflicts arise in Human-Robot Interaction? The answer varies depending on the context of the situation. Learning from how humans use hesitation gestures to negotiate a solution in such conflict situations, we present a human-inspired design of nonverbal hesitation gestures that can be used for Human-Robot Negotiation. We extracted the characteristic features of the negotiative hesitations humans use, and subsequently designed a trajectory generator (Negotiative Hesitation Generator) that can re-create these features in robot responses to conflicts. Our human-subjects experiment demonstrates the efficacy of the designed robot behaviour against a robot's non-negotiative stopping behaviour. With positive results from our human-robot interaction experiment, we provide a validated trajectory generator with which one can explore the dynamics of human-robot nonverbal negotiation of resource conflicts.
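
The abstract does not specify the internals of the Negotiative Hesitation Generator, but the characteristic hesitation cue, a reach with a brief, partial retraction near the conflict point, can be sketched as follows. The function name and all parameters here are hypothetical illustrations, not the paper's design.

```python
import numpy as np

def hesitation_trajectory(t, reach_time=2.0, hesitation_at=1.0,
                          retract_depth=0.3, retract_width=0.2):
    """Hypothetical 1-D hesitation profile: a smooth reach toward the
    resource (position 1.0) with a brief, partial retraction superimposed
    around the moment of conflict. All parameters are illustrative."""
    # Minimum-jerk-style reach from 0 to 1.
    s = np.clip(t / reach_time, 0.0, 1.0)
    reach = 10 * s**3 - 15 * s**4 + 6 * s**5
    # A Gaussian "dip" models the pause-and-retract hesitation cue.
    dip = retract_depth * np.exp(-((t - hesitation_at) ** 2)
                                 / (2 * retract_width ** 2))
    return reach - dip

t = np.linspace(0.0, 2.5, 250)
trajectory = hesitation_trajectory(t)  # stream to a robot arm as setpoints
```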


AI Magazine
2011
Vol 32 (4)
pp. 53-63
Author(s):  
Andrea L. Thomaz
Crystal Chao

Turn-taking is a fundamental part of human communication. Our goal is to devise a turn-taking framework for human-robot interaction that, like the human skill, captures something fundamental about interaction and generalizes across contexts and domains. We propose a model of turn-taking and conduct an experiment with human subjects to inform this model. Our findings from this study suggest that information flow is an integral part of human floor-passing behavior. Following this, we implement autonomous floor relinquishing on a robot and discuss our insights into the nature of a general turn-taking model for human-robot interaction.
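
One hedged way to picture autonomous floor relinquishing driven by information flow is a minimal state machine in which the robot yields the floor once it has no further content to convey. This is an illustrative sketch under that assumption, not the authors' model.

```python
from enum import Enum, auto

class Floor(Enum):
    ROBOT = auto()
    HUMAN = auto()

def update_floor(state, robot_has_content, human_speaking):
    """Toy floor-passing rule assuming two binary signals: whether the
    robot still has information to convey and whether the human is
    currently speaking. Purely illustrative."""
    if state is Floor.ROBOT and not robot_has_content:
        return Floor.HUMAN   # relinquish the floor: nothing left to say
    if state is Floor.HUMAN and not human_speaking and robot_has_content:
        return Floor.ROBOT   # reclaim the floor to continue the exchange
    return state
```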


2018
Vol 9 (1)
pp. 221-234
Author(s):  
João Avelino
Tiago Paulino
Carlos Cardoso
Ricardo Nunes
Plinio Moreno
...  

Handshaking is a fundamental part of human physical interaction that is transversal to various cultural backgrounds. It is also a very challenging task in the field of Physical Human-Robot Interaction (pHRI), requiring compliant force control both to plan the arm's motion and to achieve a confident yet pleasant grasp of the human user's hand. In this paper, we focus on the study of hand grip strength for comfortable handshakes and perform three sets of physical interaction experiments, with twenty human subjects in the first experiment, thirty-five in the second, and thirty-eight in the third. Tests are made with a social robot whose hands are instrumented with tactile sensors that provide skin-like sensation. From these experiments, we: (i) learn the preferred grip closure for each user group; (ii) analyze the tactile feedback provided by the sensors for each closure; (iii) develop and evaluate a hand grip controller based on the collected data. In addition to the robot-human interactions, we also study handshakes the robot executes with inanimate objects, in order to detect whether it is shaking hands with a human or an inanimate object. This work adds physical human-robot interaction to the repertoire of social skills of our robot, fulfilling a demand previously identified by many of its users.
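
A minimal sketch of what such a tactile-feedback grip controller could look like, assuming a normalized closure command and a target pressure learned per user group; the gain, signal ranges, and function name are illustrative assumptions, not the paper's controller.

```python
def grip_step(closure, pressure, target_pressure, gain=0.02,
              min_closure=0.0, max_closure=1.0):
    """One control tick of a hypothetical grip controller: drive hand
    closure (0 = open, 1 = fully closed) so the tactile sensors read the
    pressure preferred by the user group. Values are illustrative."""
    error = target_pressure - pressure
    closure = closure + gain * error  # simple proportional action
    return min(max(closure, min_closure), max_closure)

# Example tick: sensors report 0.4 (normalized), this group prefers 0.6,
# so the hand closes slightly further.
closure = grip_step(closure=0.5, pressure=0.4, target_pressure=0.6)
```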


Sensors
2016
Vol 16 (1)
pp. 36
Author(s):
Uriel Hernandez-Belmonte
Victor Ayala-Ramirez

2020
Vol 32 (1)
pp. 128-135
Author(s):  
Masahiro Shiomi
Hidenobu Sumioka
Hiroshi Ishiguro

In human-human interaction, social touch provides several benefits, from both physical and mental perspectives. The physical embodiment of robots enables them to reproduce human-like social touch during their interaction with people, and such touch shows positive effects similar to those observed in human-human interaction. Social touch is therefore a growing research topic in the field of human-robot interaction. This survey provides an overview of the work conducted on this topic so far.


2014
Vol 2014
pp. 1-5
Author(s):  
Jizheng Yan
Zhiliang Wang
Yan Yan

Emotional robots have long been a focus of artificial intelligence (AI), and intelligent control of robot facial expression is a hot research topic. This paper focuses on the design of a humanoid robot head, carried out in three steps. The first step addresses the uncanny valley for humanoid robots, characterizing the relationship between human likeness and viewer comfort so that the valley can be avoided. The second step establishes the correspondence between the human face and the robot head: comparing humans and robots, we analyze their similarities and differences and explore their shared basis and mechanisms through the Facial Action Coding System (FACS), which guides us in achieving humanoid expressions. Building on the previous two steps, the third step is to construct the robot head. Through a series of experiments we test the robot head, which can show several humanoid expressions; through human-robot interaction, we find that people are surprised by the robot head's expressions and respond with delight.
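
To make the FACS-based approach concrete, the sketch below maps a few action units (AUs) to hypothetical head servos and blends them into expressions. The AU numbers follow FACS, but the servo names, weights, and simplified AU sets are illustrative assumptions rather than the paper's actual hardware map.

```python
# Hypothetical mapping from FACS action units (AUs) to head-servo targets.
AU_TO_SERVOS = {
    1:  {"inner_brow_left": 0.7, "inner_brow_right": 0.7},  # AU1: inner brow raiser
    6:  {"cheek_raiser": 1.0},                              # AU6: cheek raiser
    12: {"lip_corner_left": 0.8, "lip_corner_right": 0.8},  # AU12: lip corner puller
}

EXPRESSIONS = {
    "happiness": {6: 0.9, 12: 1.0},  # AU6 + AU12: the classic smile combination
    "surprise":  {1: 1.0},           # simplified; full FACS surprise uses more AUs
}

def servo_targets(expression):
    """Blend the AU activations of an expression into per-servo commands."""
    targets = {}
    for au, intensity in EXPRESSIONS[expression].items():
        for servo, weight in AU_TO_SERVOS[au].items():
            targets[servo] = targets.get(servo, 0.0) + weight * intensity
    return targets

print(servo_targets("happiness"))
```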


2022
Vol 8
Author(s):  
Niyati Rawal
Dorothea Koert
Cigdem Turan
Kristian Kersting
Jan Peters
...  

The ability of a robot to generate appropriate facial expressions is a key aspect of perceived sociability in human-robot interaction. Yet many existing approaches rely on a set of fixed, preprogrammed joint configurations for expression generation. Automating this process offers the potential to scale better to different robot types and a wider range of expressions. To this end, we introduce ExGenNet, a novel deep generative approach for facial expressions on humanoid robots. ExGenNet connects a generator network, which reconstructs simplified facial images from robot joint configurations, with a classifier network for state-of-the-art facial expression recognition. The robots' joint configurations are optimized for various expressions by backpropagating the loss between the predicted and intended expression through the classification network and the generator network. To improve the transfer between human training images and images of different robots, we propose to use extracted features in the classifier as well as in the generator network. Unlike most studies on facial expression generation, ExGenNet can produce multiple configurations for each facial expression and be transferred between robots. Experimental evaluations on two robots with highly human-like faces, Alfie (Furhat Robot) and the android robot Elenoide, show that ExGenNet can successfully generate sets of joint configurations for predefined facial expressions on both robots. This ability of ExGenNet to generate realistic facial expressions was further validated in a pilot study in which the majority of human subjects could accurately recognize most of the generated facial expressions on both robots.
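
The optimization described in the abstract, backpropagating the expression loss through the classifier and generator to update the joint configuration, can be sketched as follows in PyTorch. The `generator` (joints to simplified face image) and `classifier` (image to expression logits) are assumed to be pretrained modules; their architectures, and all hyperparameters shown, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def optimize_joints(generator, classifier, target_class, n_joints,
                    steps=200, lr=0.05):
    """Gradient-based search for a joint configuration that the classifier
    recognizes as the intended expression. Only the joints are updated;
    both networks stay fixed."""
    joints = torch.zeros(1, n_joints, requires_grad=True)
    target = torch.tensor([target_class])
    opt = torch.optim.Adam([joints], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        image = generator(joints)                # reconstruct the face image
        logits = classifier(image)               # predict the expression
        loss = F.cross_entropy(logits, target)   # intended vs. predicted
        loss.backward()                          # backprop through both nets
        opt.step()
    return joints.detach()
```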


Author(s):  
Jaeryoung Lee
Goro Obinata
Dimitar Stefanov
Chikara Nagai

Interactive robots are seen as an efficient tool for improving the social communication skills of autistic children. Recent studies show that the effectiveness of human-robot interaction can be improved further if the robot provides positive feedback when the child demonstrates the expected behaviour or social skills. However, there is no clear answer as to which visual stimuli, or which combination of visual stimuli, attract attention best. In this paper we present initial results from our study of the response of participants with autism traits to four visual stimuli. We conducted a series of experiments in which the experimental system provided a visual response to the user's actions and monitored the user's performance for each visual stimulus. The experiments were organised as a game and included four groups of participants with different levels of autism. The results showed that colour tended to be the most effective stimulus for robot interaction with autistic people. These results could inform the design of more effective assistive robots for supporting people with autism.


Author(s):  
Gabriele Trovato
Massimiliano Zecca
Salvatore Sessa
Lorenzo Jamone
Jaap Ham
...  

As witnessed in several behavioural studies, a complex relationship exists between people's cultural background and their general acceptance of robots. However, very few studies have investigated whether a robot's language and gestures, rooted in a particular culture, affect how it is received by people of different cultures. The purpose of this work is to provide experimental evidence supporting the idea that humans may more easily accept a robot that adapts to their specific culture. Indeed, improving acceptance and reducing discomfort is fundamental for the future deployment of robots as assistive, health-care, or companion devices in society. We conducted a Human-Robot Interaction experiment in both Egypt and Japan. Human subjects were engaged in a simulated video conference with robots that greeted and spoke either in Arabic or in Japanese. The subjects completed a questionnaire assessing their preferences and their emotional state, while their spontaneous reactions were recorded in several ways. The results suggest that Egyptians prefer the Arabic-speaking robot and feel a sense of discomfort when interacting with the Japanese-speaking one; the opposite holds for the Japanese subjects. These findings confirm the importance of localising a robot in order to improve human acceptance during social human-robot interaction.

