Implementation of Human Robot Interaction on UDOO Board

Author(s):  
Min Raj Nepali ◽  
Priyanka C Karthik ◽  
Jharna Majumdar

Advanced Robot for Interactive Application (ARIA) is a humanoid robotic head capable of mimicking various human facial expressions. Much work has been done on implementing humanoid robotic heads with high-end systems and personal computers (PCs). This paper presents the essential elements necessary for implementing ARIA on the UDOO board. The main aim of the project was to develop a control system and Graphical User Interface (GUI) for ARIA, a humanoid robotic head that delivers human facial expressions in real time on an embedded board. Implementing ARIA involved careful selection of the embedded board, actuators, control algorithms, motor drivers, operating system, communication protocols, and programming languages. The UDOO board contains a quad-core Cortex-A9 processor and a microcontroller, which are interconnected. In this project the microcontroller is dedicated to the micro servo motors that drive the eye, eyebrow, and eyelid movements, whereas the processor handles the Dynamixel motors, the GUI, and the various communication modules.
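A minimal sketch of this processor/microcontroller split is shown below, assuming a simple ASCII command protocol over the board's internal UART for the micro servos and the Dynamixel SDK for the larger motors. The port names, servo and motor IDs, register address, and command format are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: split ARIA control between the UDOO's Cortex-A9 (Linux side)
# and its onboard microcontroller, which drives the eye/eyebrow/eyelid micro servos.
# Port names, IDs, the register address, and the serial command format below are
# assumptions for illustration only.
import serial                                           # pyserial: A9 <-> MCU link
from dynamixel_sdk import PortHandler, PacketHandler    # Dynamixel bus from the A9

MCU_PORT = "/dev/ttymxc3"      # assumed internal UART between processor and MCU
DXL_PORT = "/dev/ttyUSB0"      # assumed USB-serial adapter for the Dynamixel bus
ADDR_GOAL_POSITION = 116       # X-series, protocol 2.0 goal position (assumption)

def send_servo_command(link: serial.Serial, servo_id: int, angle_deg: int) -> None:
    """Forward a micro-servo setpoint to the MCU as a simple ASCII command."""
    link.write(f"S{servo_id}:{angle_deg}\n".encode())

def set_dynamixel_position(port, packet, dxl_id: int, position: int) -> None:
    """Write a goal position directly from the processor via the Dynamixel SDK."""
    packet.write4ByteTxRx(port, dxl_id, ADDR_GOAL_POSITION, position)

if __name__ == "__main__":
    mcu = serial.Serial(MCU_PORT, 115200, timeout=0.1)
    dxl_port = PortHandler(DXL_PORT)
    dxl_packet = PacketHandler(2.0)
    dxl_port.openPort()
    dxl_port.setBaudRate(57600)

    send_servo_command(mcu, servo_id=1, angle_deg=30)                      # e.g. raise an eyebrow
    set_dynamixel_position(dxl_port, dxl_packet, dxl_id=2, position=2048)  # e.g. centre a neck joint
```

In such a split, the microcontroller can generate stable PWM for the small servos while the Linux-side processor remains free to run the GUI, the Dynamixel bus, and the communication stacks.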

Author(s):  
Vignesh Prasad ◽  
Ruth Stock-Homburg ◽  
Jan Peters

For some years now, the use of social, anthropomorphic robots in various situations has been on the rise. These are robots developed to interact with humans and are equipped with corresponding extremities. They already support human users in various industries, such as retail, gastronomy, hotels, education and healthcare. During such Human-Robot Interaction (HRI) scenarios, physical touch plays a central role in the various applications of social robots, as interactive non-verbal behaviour is a key factor in making the interaction more natural. Shaking hands is a simple, natural interaction used commonly in many social contexts and is seen as a symbol of greeting, farewell and congratulations. In this paper, we take a look at the existing state of Human-Robot Handshaking research, categorise the works based on their focus areas, and draw out the major findings of these areas while analysing their pitfalls. We mainly see that some form of synchronisation exists during the different phases of the interaction. In addition to this, we also find that additional factors such as gaze, voice, and facial expressions can affect the perception of a robotic handshake, and that internal factors such as personality and mood can affect the way in which handshaking behaviours are executed by humans. Based on these findings and insights, we finally discuss possible ways forward for research on such physically interactive behaviours.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6438
Author(s):  
Chiara Filippini ◽  
David Perpetuini ◽  
Daniela Cardone ◽  
Arcangelo Merla

An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot's capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As with person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor's emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot's awareness of human facial expressions and provides the robot with the capability to detect the interlocutor's arousal level. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
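As a rough illustration of the kind of model described (not the authors' network), the sketch below builds a small convolutional classifier over 48×48 grayscale face crops for the expression classes listed above; the layer sizes, input resolution, and training configuration are assumptions.

```python
# Minimal sketch of a CNN facial expression classifier (illustrative only).
# The architecture and input size are assumptions, not the published model.
import tensorflow as tf

CLASSES = ["happy", "sad", "surprised", "scared", "neutral", "angry"]

def build_expression_cnn(num_classes: int = len(CLASSES)) -> tf.keras.Model:
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(48, 48, 1)),
        tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(128, 3, activation="relu", padding="same"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def classify(model: tf.keras.Model, face_batch):
    """face_batch: float array of shape (N, 48, 48, 1) with values in [0, 1]."""
    probs = model.predict(face_batch, verbose=0)
    return [CLASSES[i] for i in probs.argmax(axis=1)]
```

On the robot, a trained model of this kind would be exported and invoked from the NAO SDK behaviour code, with the reported inference time of roughly 0.34 s per frame serving as the latency budget for the interaction loop.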


Author(s):  
Matthias Scheutz ◽  
Paul Schermerhorn

Effective decision-making under real-world conditions can be very difficult as purely rational methods of decision-making are often not feasible or applicable. Psychologists have long hypothesized that humans are able to cope with time and resource limitations by employing affective evaluations rather than rational ones. In this chapter, we present the distributed integrated affect cognition and reflection architecture DIARC for social robots intended for natural human-robot interaction and demonstrate the utility of its human-inspired affect mechanisms for the selection of tasks and goals. Specifically, we show that DIARC incorporates affect mechanisms throughout the architecture, which are based on “evaluation signals” generated in each architectural component to obtain quick and efficient estimates of the state of the component, and illustrate the operation and utility of these mechanisms with examples from human-robot interaction experiments.
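To make the idea of per-component "evaluation signals" concrete, the following sketch accumulates positive and negative signals in each component and uses the resulting affective estimates to bias goal selection. The class names, signal ranges, and scoring rule are hypothetical; this is not DIARC's actual implementation.

```python
# Illustrative sketch of affect-biased goal selection from per-component
# "evaluation signals". All names and the scoring rule are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    positive: float = 0.0   # accumulated positive evaluation signals
    negative: float = 0.0   # accumulated negative evaluation signals

    def affect(self) -> float:
        """Quick, cheap estimate of the component's state in [-1, 1]."""
        total = self.positive + self.negative
        return 0.0 if total == 0 else (self.positive - self.negative) / total

@dataclass
class Goal:
    name: str
    expected_benefit: float
    relevant: list = field(default_factory=list)  # components whose affect weighs in

    def utility(self) -> float:
        """Expected benefit modulated by the affect of the relevant components."""
        if not self.relevant:
            return self.expected_benefit
        mood = sum(c.affect() for c in self.relevant) / len(self.relevant)
        return self.expected_benefit * (1.0 + mood)

def select_goal(goals):
    return max(goals, key=lambda g: g.utility())

if __name__ == "__main__":
    vision = Component("vision", positive=0.8, negative=0.2)
    dialogue = Component("dialogue", positive=0.1, negative=0.6)
    goals = [Goal("greet person", 1.0, [vision, dialogue]),
             Goal("explore room", 0.8, [vision])]
    print(select_goal(goals).name)
```

Here a goal's expected benefit is scaled up when the components it depends on report positive affect and scaled down otherwise, giving a cheap, always-available ranking without a full rational evaluation of each alternative.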

