A Socially Assistive Robot That Can Interpret Body Language

Author(s):  
Derek McColl ◽  
Goldie Nejat

Socially assistive robots can engage in assistive human-robot interactions (HRI) by providing rehabilitation of cognitive, social, and physical abilities after a stroke, an accident, or a diagnosis of a social, developmental, or cognitive disorder. However, a number of research issues need to be addressed in order to design such robots. In this paper, we address one main challenge in the development of intelligent socially assistive robots: a robot’s ability to identify human non-verbal communication during assistive interactions. We present a unique non-contact, automated, sensory-based approach for identifying and categorizing human upper-body language to determine how accessible a person is to a robot during natural real-time HRI. This classification allows a robot to effectively determine its own reactive, task-driven behavior during assistive interactions. The types of interactions envisioned include providing reminders, health monitoring, and social and cognitive therapies. Preliminary experiments show the potential of integrating the proposed body language recognition and classification technique into socially assistive robotic systems partaking in HRI scenarios.
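The mapping from observed upper-body language to an accessibility level could be sketched as follows. This is a hypothetical illustration only: the feature names, thresholds, and four-level scale below are assumptions for the sake of example, not the paper's actual sensing pipeline or categories.

```python
# Hypothetical sketch: categorizing upper-body pose cues into a coarse
# accessibility level for HRI. Features and thresholds are illustrative.

def accessibility_level(trunk_lean_deg: float, arms_open: bool) -> int:
    """Map simple upper-body cues to an accessibility level.

    trunk_lean_deg: forward lean toward the robot (positive = toward it).
    arms_open: whether the arms are uncrossed/open.
    Returns 1 (least accessible) .. 4 (most accessible).
    """
    if trunk_lean_deg > 10 and arms_open:
        return 4  # leaning in with an open posture: highly accessible
    if trunk_lean_deg > 10:
        return 3  # leaning in but with a closed posture
    if arms_open:
        return 2  # neutral lean, open posture
    return 1      # leaning away or closed posture: least accessible

print(accessibility_level(15.0, True))  # → 4
```

In a deployed system, the robot would use such a level to gate its reactive behavior, e.g., initiating a reminder only above some accessibility threshold.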

Author(s):  
Zhe Zhang ◽  
Goldie Nejat

A new breed of robots, known as socially assistive robots, is emerging. These robots are capable of providing assistance to individuals through social and cognitive interaction. The development of socially assistive robots for health care applications can provide measurable improvements in patient safety, quality of care, and operational efficiency by playing an increasingly important role in patient care in fast-paced, crowded clinics, hospitals, and nursing/veterans homes. However, a number of research issues need to be addressed in order to design such robots. In this paper, we address one main challenge in the development of intelligent socially assistive robots: the robot’s ability to identify, understand, and react to human intent and human affective states during assistive interaction. In particular, we present a unique non-contact and non-restricting sensory-based approach for identifying and categorizing human body language to determine the affective state of a person during natural real-time human-robot interaction. This classification allows the robot to effectively determine its task-driven behavior during assistive interaction. Preliminary experiments show the potential of integrating the proposed gesture recognition and classification technique into intelligent socially assistive robotic systems for autonomous interactions with people.


Author(s):  
Maurizio Ficocelli ◽  
Goldie Nejat ◽  
Greg Minseok Jhin

As the first round of baby boomers turns 65 in 2011, we must be prepared for the largest demographic group in history that may need long-term care from nursing homes and home health providers. The development of socially assistive robots for health care applications can provide measurable improvements in patient safety, quality of care, and operational efficiency by playing an increasingly important role in patient care in fast-paced, crowded clinics, hospitals, and nursing/veterans homes. However, a number of research issues need to be addressed in order to design such robots. In this paper, we address one of the main limitations to the development of intelligent socially assistive robots for health care applications: robotic control architecture design and implementation with explicit social and assistive task functionalities. In particular, we present the design of a unique learning-based multi-layer decision-making control architecture for determining the appropriate behavior of the robot. Herein, we explore and compare two different learning-based techniques that can be utilized as the main decision-making module of the controller. Preliminary experiments show the potential of integrating the aforementioned techniques into the overall design of such robots intended for assistive scenarios.
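The abstract does not name the two learning techniques compared, but a tabular Q-learning update is one common choice for such a decision-making module; the sketch below is a generic illustration under that assumption. The states, actions, and reward are invented for the example and are not taken from the paper.

```python
import random

# Illustrative sketch only: tabular Q-learning for choosing a robot behavior
# (prompt, encourage, wait) given a coarse user state. States, actions, and
# rewards here are hypothetical, not the paper's architecture.
STATES = ["engaged", "distracted", "frustrated"]
ACTIONS = ["prompt", "encourage", "wait"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def select_action(state: str) -> str:
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state: str, action: str, reward: float, next_state: str) -> None:
    """Standard one-step Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative transition: encouraging a distracted user re-engaged them.
update("distracted", "encourage", reward=1.0, next_state="engaged")
print(Q[("distracted", "encourage")])  # → 0.1 after a single backup
```

Over repeated interactions, such a module would shift the robot toward behaviors that historically improved the user's state.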


Author(s):  
Junichi Terao ◽  
Lina Trejos ◽  
Zhe Zhang ◽  
Goldie Nejat

The development of socially assistive robots for health care applications can provide measurable improvements in patient safety, quality of care, and operational efficiency by playing an increasingly important role in patient care in fast-paced, crowded clinics, hospitals, and nursing/veterans homes. However, a number of research issues need to be addressed in order to design such robots. In this paper, we address two main limitations to the development of intelligent socially assistive robots: (i) identification of human body language via a non-contact sensory system and categorization of these gestures for determining the accessibility level of a person during human-robot interaction, and (ii) decision-making control architecture design for determining the learning-based, task-driven behavior of the robot during assistive interaction. Preliminary experiments show the potential of integrating the aforementioned techniques into the overall design of such robots intended for assistive scenarios.


Author(s):  
Ala Addin I. Sidig ◽  
Hamzah Luqman ◽  
Sabri Mahmoud ◽  
Mohamed Mohandes

Sign language is the major means of communication for the deaf community. It uses body language and gestures such as hand shapes, lip patterns, and facial expressions to convey a message. Sign language is geography-specific, as it differs from one country to another. Arabic Sign Language (ArSL) is used in all Arab countries. The lack of a comprehensive benchmarking database for ArSL is one of the challenges facing its automatic recognition. This article introduces the KArSL database for ArSL, consisting of 502 signs that cover 11 chapters of the ArSL dictionary. Signs in the KArSL database are performed by three professional signers, and each sign is repeated 50 times by each signer. The database is recorded using the state-of-the-art multi-modal Microsoft Kinect V2. We also propose three approaches for sign language recognition using this database: Hidden Markov Models, a deep learning image-classification model applied to an image composed of frames sampled from the sign’s video, and an attention-based deep learning captioning system. The recognition accuracies of these systems indicate their suitability for such a large number of Arabic signs. The techniques are also tested on a publicly available database. The KArSL database will be made freely available to interested researchers.
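The Hidden Markov Model approach can be sketched as follows: each sign gets its own HMM, an observation sequence is scored under every model with the forward algorithm, and the highest-likelihood sign wins. The toy discrete models and symbol sequences below are purely illustrative; KArSL itself uses Kinect V2 skeletal and depth features, not these symbols.

```python
import numpy as np

# Minimal sketch of HMM-based sign classification: score an observation
# sequence under each sign's discrete HMM via the scaled forward algorithm
# and pick the most likely sign. Toy parameters only.

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | model) via the scaled forward algorithm.

    obs: sequence of discrete observation indices
    pi:  (N,) initial state probabilities
    A:   (N, N) state transition matrix
    B:   (N, M) emission matrix
    """
    alpha = pi * B[:, obs[0]]
    log_lik = 0.0
    for t in range(len(obs)):
        if t > 0:
            alpha = (alpha @ A) * B[:, obs[t]]
        scale = alpha.sum()
        log_lik += np.log(scale)
        alpha = alpha / scale   # rescale to avoid numerical underflow
    return log_lik

def classify(obs, models):
    """Return the sign label whose HMM assigns obs the highest likelihood."""
    return max(models, key=lambda sign: forward_log_likelihood(obs, *models[sign]))

# Two toy 2-state, 2-symbol models: sign_A's states favor symbol 0,
# sign_B's states favor symbol 1.
transitions = np.array([[0.7, 0.3], [0.3, 0.7]])
models = {
    "sign_A": (np.array([0.5, 0.5]), transitions,
               np.array([[0.8, 0.2], [0.9, 0.1]])),
    "sign_B": (np.array([0.5, 0.5]), transitions,
               np.array([[0.2, 0.8], [0.1, 0.9]])),
}
print(classify([0, 0, 1, 0, 0], models))  # → sign_A (a 0-heavy sequence)
```

In a full system, one HMM per sign would be trained on the 50 repetitions per signer, and classification would proceed exactly as in `classify`.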


AI Magazine ◽  
2015 ◽  
Vol 36 (4) ◽  
pp. 23-33 ◽  
Author(s):  
Domen Novak ◽  
Robert Riener

Rehabilitation robots physically support and guide a patient's limb during motor therapy, but require sophisticated control algorithms and artificial intelligence to do so. This article provides an overview of the state of the art in this area. It begins with the dominant paradigm of assistive control, from impedance-based cooperative controllers through electromyography and intention estimation. It then covers challenge-based algorithms, which give the patient more difficult and complex tasks to perform through resistive control and error augmentation. Furthermore, it describes exercise adaptation algorithms that change the overall exercise intensity based on the patient's performance or physiological responses, as well as socially assistive robots that provide only verbal and visual guidance. The article concludes with a discussion of the current challenges in rehabilitation robot software: evaluating existing control strategies in a clinical setting and increasing the robot's autonomy using entirely new artificial intelligence techniques.
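The impedance-based cooperative control the article surveys can be illustrated with its core control law: the robot renders a virtual spring-damper that pulls the patient's limb toward a reference trajectory. The gains and numbers below are made-up values for a one-dimensional sketch, not parameters from any specific device.

```python
# Illustrative sketch of an impedance-based assistive control law: the robot
# applies a force from a virtual spring (stiffness k) and damper (damping b)
# acting between the limb and a reference trajectory. Gains are hypothetical.

def impedance_force(x, x_dot, x_ref, x_ref_dot, k=50.0, b=5.0):
    """Assistive force F = k*(x_ref - x) + b*(x_ref_dot - x_dot).

    x, x_dot:         measured limb position (m) and velocity (m/s)
    x_ref, x_ref_dot: reference trajectory position and velocity
    k, b:             virtual stiffness (N/m) and damping (N*s/m)
    Returns the force (N) applied along the movement axis.
    """
    return k * (x_ref - x) + b * (x_ref_dot - x_dot)

# Limb lags 0.1 m behind the reference and is 0.2 m/s too slow:
print(impedance_force(x=0.4, x_dot=0.1, x_ref=0.5, x_ref_dot=0.3))
# → 6.0 N of assistance toward the reference (50*0.1 + 5*0.2)
```

Lowering k and b makes the controller more compliant, letting the patient deviate from the trajectory and contribute more effort, which is the central tuning trade-off in this paradigm.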


2012 ◽  
Vol 13 (2) ◽  
pp. 114-120.e1 ◽  
Author(s):  
Roger Bemelmans ◽  
Gert Jan Gelderblom ◽  
Pieter Jonker ◽  
Luc de Witte
