head gestures
Recently Published Documents

TOTAL DOCUMENTS: 73 (FIVE YEARS 25)
H-INDEX: 13 (FIVE YEARS 2)

2022, Vol 3
Author(s): Agnes Axelsson, Gabriel Skantze

Feedback is an essential part of all communication, and agents communicating with humans must be able to both give and receive feedback in order to ensure mutual understanding. In this paper, we analyse multimodal feedback given by humans towards a robot that is presenting a piece of art in a shared environment, similar to a museum setting. The data analysed consists of video and audio recordings of 28 participants and has been richly annotated both in terms of multimodal cues (speech, gaze, head gestures, facial expressions, and body pose) and in terms of the polarity of any feedback (negative, positive, or neutral). We train statistical and machine learning models on the dataset and find that random forest models and multinomial regression models perform well at predicting the polarity of the participants' reactions. An analysis of the different modalities shows that most information is found in the participants' speech and head gestures, while much less information is found in their facial expressions, body pose, and gaze. An analysis of the timing of the feedback shows that most feedback is given when the robot makes pauses (and thereby invites feedback), but that the more exact timing of the feedback does not affect its meaning.
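The abstract does not include code, but as a rough illustration of the modelling step it describes, the following is a minimal sketch of training a random forest and a multinomial (logistic) regression model to predict feedback polarity from multimodal features. The file name and feature columns are hypothetical placeholders, not the authors' data or implementation.

```python
# Minimal sketch (not the authors' code): the two model families mentioned in the
# abstract, trained on a hypothetical table of multimodal features per feedback event.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical annotated dataset: one row per feedback event, with feature columns
# derived from speech, gaze, head gestures, facial expressions, and body pose.
df = pd.read_csv("feedback_features.csv")   # assumed file name
X = df.drop(columns=["polarity"])           # multimodal feature columns
y = df["polarity"]                          # "negative" / "neutral" / "positive"

models = {
    "random forest": RandomForestClassifier(n_estimators=300, random_state=0),
    "multinomial regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="f1_macro")
    print(f"{name}: macro-F1 = {scores.mean():.2f} ± {scores.std():.2f}")
```

A per-modality analysis of the kind the abstract reports could be approximated by repeating this evaluation with one feature group (e.g., all gaze-derived columns) removed at a time.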


2021, Vol 8 (4), pp. 661-690
Author(s): Mariwan Asaad Samad, Nawzad Anwer Omar

This research, entitled "Speech Act Analysis for Head Movement and Gesture", is an attempt to analyse the movements and gestures of one part of the human body, the head, according to the conditions and rules of speech act theory. The research consists of an introduction and two parts. The first part is devoted to speech act theory: it outlines the history of the theory, identifies its most important features, and presents a number of classifications of its main components. The second part is practical: it covers a number of head movements and gestures, analyses each of them according to speech act theory, and specifies the goal of each movement and gesture. The research ends with the most important results and a list of sources.


ACTA IMEKO, 2021, Vol 10 (3), pp. 81
Author(s): Joi Oh, Fumihiro Kato, Iwasaki Yukiko, Hiroyasu Iwata

This paper introduces a novel interface, the ‘3D head pointer’, for the operation of a wearable robotic arm in 3D space. The developed system is intended to assist its user in the execution of routine tasks while operating a robotic arm. Previous studies have demonstrated the difficulty a user faces in simultaneously controlling a robotic arm and their own hands. The proposed method combines a head-based pointing device and voice recognition to manipulate position and orientation, as well as to switch between these two modes. In a virtual reality environment, the position instructions of the proposed system and its usefulness were evaluated by measuring the accuracy of the instructions and the time they required, using a fully immersive head-mounted display (HMD). In addition, the entire system, including posture instructions with two switching methods (voice recognition and head gestures), was evaluated using an optical see-through HMD. The results showed an accuracy of 1.25 cm and 3.56°, with about 20 s needed to communicate an instruction. These results demonstrate that voice recognition is a more effective switching method than head gestures.
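To make the reported accuracy figures concrete, here is a minimal sketch, under assumed conventions, of how a head-pose ray plus a separately supplied depth (e.g., given by voice) could be turned into a 3D target, and how position error (cm) and orientation error (degrees) could be computed. The function names and coordinate convention are illustrative, not taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's implementation): turning a head
# pose into a 3D target point and scoring instruction accuracy as position error
# in centimetres and orientation error in degrees.
import numpy as np

def head_ray_target(head_pos, yaw_deg, pitch_deg, distance_m):
    """Point at `distance_m` along the ray defined by head yaw/pitch (degrees)."""
    yaw, pitch = np.radians([yaw_deg, pitch_deg])
    direction = np.array([
        np.cos(pitch) * np.sin(yaw),   # x: right
        np.sin(pitch),                 # y: up
        np.cos(pitch) * np.cos(yaw),   # z: forward
    ])
    return np.asarray(head_pos) + distance_m * direction

def position_error_cm(target, ground_truth):
    return 100.0 * np.linalg.norm(np.asarray(target) - np.asarray(ground_truth))

def orientation_error_deg(q_cmd, q_goal):
    """Angle between two unit quaternions given as (w, x, y, z)."""
    dot = abs(np.dot(q_cmd, q_goal))
    return np.degrees(2.0 * np.arccos(np.clip(dot, -1.0, 1.0)))

# Example: head at eye height, looking 10° right and 5° up, target placed 0.5 m away.
p = head_ray_target([0.0, 1.6, 0.0], yaw_deg=10, pitch_deg=5, distance_m=0.5)
print(position_error_cm(p, [0.09, 1.65, 0.49]))
# Example: commanded orientation vs. a goal rotated ~5° about the vertical axis.
print(orientation_error_deg([1.0, 0.0, 0.0, 0.0], [0.999, 0.0, 0.0436, 0.0]))
```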


2021
Author(s): Gijs Huisman, Tommaso Lisini Baldi, Nicole D'Aurizio, Domenico Prattichizzo

2021, Vol 8
Author(s): Anna-Maria Velentza, Nikolaos Fachantidis, Sofia Pliasa

The influence of human-care service robots in human–robot interaction is becoming of great importance because of the roles that robots are taking in today's and future society. Thus, we need to identify how humans can interact with, collaborate with, and learn from social robots more efficiently. Additionally, it is important to determine the robot modalities that can increase humans' perceived likeness and knowledge acquisition and enhance human–robot collaboration. The present study aims to identify the optimal social service robot modalities that enhance the human learning process and the level of enjoyment from the interaction, and even influence which robot humans choose to collaborate with. Our target group was college students who are pre-service teachers. For this purpose, we designed two experiments, each split into two parts. Both experiments used a between-groups design, and participants watched the Nao robot performing a storytelling exercise about the history of robots in a museum-educational activity via video annotations. The robot's modalities were manipulated in terms of its body movements (expressive arm and head gestures) while performing the storytelling, its friendly attitude and storytelling style, and its personality traits. After the robot's storytelling, participants filled out a knowledge acquisition questionnaire and a self-reported enjoyment level questionnaire. In the second part, participants witnessed a conversation between the robots with the different modalities and were asked to choose the robot with which they wanted to collaborate in a similar activity. Results indicated that participants prefer to collaborate with robots with a cheerful personality and expressive body movements. In particular, when asked to choose between two robots that were both cheerful and had expressive body movements, they preferred the one that originally told them the story. Moreover, participants did not prefer to collaborate with a robot with an extremely friendly attitude and storytelling style.


2021
Author(s): Núria Esteve‐Gibert, Hélène Lœvenbruck, Marion Dohen, Mariapaola D'Imperio
Keyword(s):

2021, Vol 6
Author(s): Melisa Stevanovic

Joint decision-making is a thoroughly collaborative interactional endeavor. To construct the outcome of the decision-making sequence as a “joint” one necessitates that the participants constantly negotiate their shared activity, not only with reference to the content of the decisions to be made, but also with reference to whether, when, and upon what exactly decisions are to be made in the first place. In this paper, I draw on a dataset of video-recorded dyadic planning meetings between two church officials, investigating a collection of 35 positive assessments with the Finnish particle ihan “quite” occurring in response to a proposal (e.g., tää on ihan kiva “this is quite nice”). The analysis focuses on the embodied delivery of these assessments in combination with their other features: their sequential location and immediate interactional consequences (i.e., accounts, decisions, abandoning of the proposal), their auxiliary verbal turn-design features (i.e., particles), and the “agent” of the proposals that they are responsive to (i.e., who has made the proposal and whether it is based on some written authoritative material). Three multimodal action packages are described, in which the assessment serves 1) to accept an idea in principle, which is combined with no speaker movement, 2) to concede to a plan, which is associated with notable expressive speaker movement (e.g., head gestures, facial expressions), and 3) to establish a joint decision, which is accompanied by the participants’ synchronous body movements. The paper argues that the relative decision-implicativeness of these three multimodal action packages is largely based on the management and distribution of participation and agency between the two participants, which involves the participants using their bodies to position themselves toward their co-participants and toward the proposals “in the air” in distinct ways.


Author(s): Nafisa Mapari, Abdullah Shaikh, Atik Shaikh, Zaid Siddiqui

Humans communicate with each other through natural language channels such as speech and writing, or through body language (gestures) such as hand and head gestures, facial expressions, and lip motion. While understanding natural language is essential, learning sign language is also very important: for deaf and hearing-impaired people, sign language is the primary means of communication. Because translators are rarely available, deaf people face difficulties communicating with hearing people. This motivates us to create a system that recognises sign language and can thereby significantly improve deaf people's social lives.

