A Human-Robot Interaction Control Architecture for an Intelligent Assistive Robot

Author(s):  
Maurizio Ficocelli ◽  
Goldie Nejat ◽  
Greg Minseok Jhin

As the first round of baby boomers turns 65 in 2011, we must be prepared for the largest demographic group in history that could need long-term care from nursing homes and home health providers. The development of socially assistive robots for health care applications can provide measurable improvements in patient safety, quality of care, and operational efficiencies by playing an increasingly important role in patient care in fast-paced, crowded clinics, hospitals and nursing/veterans homes. However, there are a number of research issues that need to be addressed in order to design such robots. In this paper, we address one of the main limitations to the development of intelligent socially assistive robots for health care applications: robotic control architecture design and implementation with explicit social and assistive task functionalities. In particular, we present the design of a unique learning-based multi-layer decision-making control architecture for determining the appropriate behavior of the robot. Herein, we explore and compare two different learning-based techniques that can be utilized as the main decision-making module of the controller. Preliminary experiments show the potential of integrating these techniques into the overall design of robots intended for assistive scenarios.

Author(s):  
Junichi Terao ◽  
Lina Trejos ◽  
Zhe Zhang ◽  
Goldie Nejat

The development of socially assistive robots for health care applications can provide measurable improvements in patient safety, quality of care, and operational efficiencies by playing an increasingly important role in patient care in fast-paced, crowded clinics, hospitals and nursing/veterans homes. However, there are a number of research issues that need to be addressed in order to design such robots. In this paper, we address two main limitations to the development of intelligent socially assistive robots: (i) identification of human body language via a non-contact sensory system and categorization of these gestures to determine a person's accessibility level during human-robot interaction, and (ii) decision-making control architecture design for determining the learning-based task-driven behavior of the robot during assistive interaction. Preliminary experiments show the potential of integrating these techniques into the overall design of robots intended for assistive scenarios.


Author(s):  
Zhe Zhang ◽  
Goldie Nejat

A novel breed of robots known as socially assistive robots is emerging. These robots are capable of providing assistance to individuals through social and cognitive interaction. The development of socially assistive robots for health care applications can provide measurable improvements in patient safety, quality of care, and operational efficiencies by playing an increasingly important role in patient care in fast-paced, crowded clinics, hospitals and nursing/veterans homes. However, there are a number of research issues that need to be addressed in order to design such robots. In this paper, we address one main challenge in the development of intelligent socially assistive robots: the robot’s ability to identify, understand and react to human intent and human affective states during assistive interaction. In particular, we present a unique non-contact and non-restricting sensory-based approach for identifying and categorizing human body language to determine the affective state of a person during natural real-time human-robot interaction. This classification allows the robot to effectively determine its task-driven behavior during assistive interaction. Preliminary experiments show the potential of integrating the proposed gesture recognition and classification technique into intelligent socially assistive robotic systems for autonomous interactions with people.


2011 ◽  
Vol 08 (01) ◽  
pp. 103-126 ◽  
Author(s):  
JEANIE CHAN ◽  
GOLDIE NEJAT ◽  
JINGCONG CHEN

Recently, there has been a growing body of research supporting the effectiveness of non-pharmacological cognitive and social training interventions in reducing the decline of, or improving, brain functioning in individuals suffering from cognitive impairments. However, implementing and sustaining such interventions on a long-term basis is difficult, as they require considerable resources and people and can be very time-consuming for healthcare staff. Our research focuses on making these interventions more accessible to healthcare professionals through the aid of robotic assistants. The objective of our work is to develop an intelligent socially assistive robot able to recognize and identify human affective intent and to determine its own appropriate emotion-based behavior while engaging in assistive interactions with people. In this paper, we present the design of a novel human-robot interaction (HRI) control architecture that allows the robot to provide social and cognitive stimulation in person-centered cognitive interventions. Namely, the control architecture is designed to allow a robot to act as a social motivator by encouraging, congratulating and assisting a person during the course of a cognitively stimulating activity. Preliminary experiments validate the effectiveness of the control architecture in providing assistive interactions during an HRI-based person-directed activity.
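The social-motivator role described in this abstract (encouraging, congratulating and assisting during an activity) can be illustrated with a minimal behavior-selection sketch. This is not the authors' control architecture; the state fields, thresholds, and behavior names below are invented for illustration.

```python
# Hypothetical sketch of behavior selection in an HRI control architecture
# of the kind described above: the robot picks an assistive behavior
# from the observed state of a cognitively stimulating activity.
# All state fields and thresholds here are invented assumptions.

def select_behavior(user_state: dict) -> str:
    """Map a simple activity state to a social-motivator behavior."""
    if user_state.get("activity_complete"):
        return "congratulate"  # activity finished successfully
    if user_state.get("errors", 0) >= 3 or user_state.get("idle_seconds", 0) > 30:
        return "assist"        # user appears stuck or inactive
    return "encourage"         # default: keep the user engaged

# Example: a user who has made several errors receives assistance
print(select_behavior({"errors": 4, "activity_complete": False}))  # assist
```

A real architecture would, as the abstract notes, layer this decision over affect recognition rather than hand-coded thresholds; the sketch only shows the shape of the task-driven behavior mapping.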


2021 ◽  
Vol 5 (11) ◽  
pp. 71
Author(s):  
Ela Liberman-Pincu ◽  
Amit David ◽  
Vardit Sarne-Fleischmann ◽  
Yael Edan ◽  
Tal Oron-Gilad

This study examines the effect of a COVID-19 Officer Robot (COR) on passersby compliance and the effects of minor design manipulations on human–robot interaction. A robotic application was developed to ensure that participants entering a public building comply with COVID-19 restrictions requiring a green pass and a face mask. The participants’ attitudes toward the robot and their perception of its authoritativeness were explored through video and questionnaire data. Thematic analysis was used to define unique behaviors related to human–COR interaction. Direct and extended interactions with minor design manipulations of the COR were evaluated in a public scenario setting. The results demonstrate that even minor design manipulations may influence users’ attitudes toward officer robots. The outcomes of this research can support manufacturers in rapidly adjusting their robots to new domains and tasks, and guide future designs of authoritative socially assistive robots (SARs).


10.2196/13729 ◽  
2019 ◽  
Vol 21 (6) ◽  
pp. e13729 ◽  
Author(s):  
Meia Chita-Tegmark ◽  
Janet M Ackerman ◽  
Matthias Scheutz

Background As robots are increasingly designed for health management applications, it is critical to consider not only the effects robots will have on patients but also a patient’s wider social network, including the patient’s caregivers and health care providers, among others. Objective In this paper we investigated how people evaluate robots that provide care and how they form impressions of the patient the robot cares for, based on how the robot represents the patient. Methods We used a vignette-based study, showing participants hypothetical scenarios describing behaviors of assistive robots (patient-centered or task-centered) and measuring their influence on people’s evaluations of the robot itself (emotional intelligence [EI], trustworthiness, and acceptability) as well as people’s perceptions of the patient for whom the robot provides care. Results We found that a robot described as acting in a patient-centered manner will not only be perceived as having higher EI (P=.003) but will also cause people to form more positive impressions of the patient the robot cares for (P<.001). We replicated and expanded these results in other domains such as dieting, learning, and job training. Conclusions These results imply that robots could be used to enhance human-human relationships in the health care context and beyond.


Author(s):  
Yvonne Rosehart

Introduction CIHI’s Population Grouping Methodology uses data from multiple sectors to create clinical profiles and to predict the entire population’s current and future morbidity burden and healthcare utilization. Outputs from the grouper can be applied to healthcare decision-making and planning processes. Objectives and Approach The population grouping methodology starts with everyone who is eligible for healthcare, including those who haven’t interacted with the healthcare system, providing a true picture of the entire population. The grouper uses diagnosis information over a 2-year period to create health profiles and predict individuals’ future morbidity and expected use of primary care, emergency department and long-term care services. Predictive models were developed using age, sex, health conditions and the most influential health-condition interactions as the predictors. These models produce predictive indicators for the concurrent period as well as one year into the future. Results The power of the model lies in the user’s ability to aggregate the data by population segments and compare healthcare resource utilization across different geographic regions, health sectors and health statuses. The presentation will focus on how CIHI’s population grouping methodology helps clients monitor population health and conduct disease surveillance. It assists clients with population segmentation, health profiling, predicting healthcare utilization patterns and explaining variation in healthcare resource use. It can be used for risk adjustment of populations for inter-jurisdictional analysis and for capacity planning, and it can also be used as a component in funding models. Conclusion/Implications CIHI’s population grouping methodology is a useful tool for profiling and predicting healthcare utilization, with key applications for health policy makers, planners and funders. The presentation will focus on how stakeholders can apply the outputs to aid in their decision-making and planning processes.
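The predictor structure described in this abstract (age, sex, health conditions, and selected condition interactions driving an expected-utilization indicator) can be sketched with a toy regression. This is not CIHI's model; the variables, coefficients, and simulated data below are invented assumptions used only to show the shape of such a model.

```python
import numpy as np

# Hypothetical illustration of the grouper's predictive-model structure:
# predictors are age, sex, health-condition flags, and a selected
# condition-interaction term; the response is expected service use.
# All names, coefficients, and data here are invented for illustration.

rng = np.random.default_rng(0)
n = 200
age = rng.integers(18, 90, n)
sex = rng.integers(0, 2, n)              # 0 = male, 1 = female
diabetes = rng.integers(0, 2, n)         # hypothetical condition flag
heart_disease = rng.integers(0, 2, n)    # hypothetical condition flag

# Interaction term: comorbid diabetes and heart disease
interaction = diabetes * heart_disease

# Simulated outcome: expected primary-care visits in the prediction period
visits = (1.0 + 0.03 * age + 0.5 * sex + 2.0 * diabetes
          + 1.5 * heart_disease + 3.0 * interaction
          + rng.normal(0, 0.5, n))

# Design matrix with an intercept column
X = np.column_stack([np.ones(n), age, sex, diabetes, heart_disease, interaction])

# Ordinary least squares fit (a stand-in for the grouper's predictive models)
coef, *_ = np.linalg.lstsq(X, visits, rcond=None)
predicted = X @ coef
```

Aggregating `predicted` over population segments (region, sector, health status) mirrors the comparative use of the grouper's outputs described above.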


Author(s):  
Derek McColl ◽  
Goldie Nejat

Socially assistive robots can engage in assistive human-robot interactions (HRI) by providing rehabilitation of cognitive, social, and physical abilities after a stroke, an accident, or a diagnosis of a social, developmental or cognitive disorder. However, there are a number of research issues that need to be addressed in order to design such robots. In this paper, we address one main challenge in the development of intelligent socially assistive robots: a robot’s ability to identify human non-verbal communication during assistive interactions. In particular, we present a unique non-contact automated sensory-based approach for identifying and categorizing human upper body language to determine how accessible a person is to a robot during natural real-time HRI. This classification will allow a robot to effectively determine its own reactive task-driven behavior during assistive interactions. The types of interactions envisioned include providing reminders, health monitoring, and social and cognitive therapies. Preliminary experiments show the potential of integrating the proposed body language recognition and classification technique into socially assistive robotic systems partaking in HRI scenarios.


Sensors ◽  
2020 ◽  
Vol 20 (11) ◽  
pp. 3033
Author(s):  
Soheil Keshmiri ◽  
Masahiro Shiomi ◽  
Hidenobu Sumioka ◽  
Takashi Minato ◽  
Hiroshi Ishiguro

Touch plays a crucial role in humans’ nonverbal social and affective communication. It is therefore no surprise that considerable effort has been devoted to devising methodologies for automated touch classification. Such an ability would, for instance, allow smart touch sensors to be used in real-life application domains such as socially assistive robots and embodied telecommunication. The touch classification literature has indeed made steady progress. However, these results are limited in two important ways. First, they are mostly based on the overall (i.e., average) accuracy of different classifiers, and so fall short of providing insight into the performance of these approaches for different types of touch. Second, they do not consider the same type of touch at different levels of strength (e.g., gentle versus strong touch). This is an important factor that deserves investigation, since the intensity of a touch can utterly transform its meaning (e.g., from an affectionate gesture to a sign of punishment). The current study provides a preliminary investigation of these shortcomings by considering the accuracy of a number of classifiers for both within-touch (i.e., same type of touch with differing strengths) and between-touch (i.e., different types of touch) classification. Our results help verify the strengths and shortcomings of different machine learning algorithms for touch classification. They also highlight some of the challenges whose solutions can pave the path for the integration of touch sensors in application domains such as human–robot interaction (HRI).
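The evaluation protocol this abstract argues for, per-class accuracy over both within-touch (gentle vs. strong) and between-touch splits, can be sketched with synthetic data. The feature vectors, touch labels, and the nearest-centroid classifier below are invented stand-ins, not the sensors or classifiers used in the study.

```python
import numpy as np

# Hypothetical sketch of the evaluation idea described above: report
# per-class accuracy (not just the overall average), with the same touch
# type split by strength ("pat_gentle" vs "pat_strong") alongside a
# different touch type ("hit"). Data and classifier are invented.

rng = np.random.default_rng(1)

def make_touches(center, n=60):
    """Synthetic 2-D touch-feature vectors scattered around a class center."""
    return center + rng.normal(0, 0.3, (n, 2))

classes = {
    "pat_gentle": make_touches(np.array([1.0, 1.0])),
    "pat_strong": make_touches(np.array([1.0, 2.5])),
    "hit":        make_touches(np.array([3.0, 2.0])),
}

# Nearest-centroid classifier: one centroid per class
centroids = {k: v.mean(axis=0) for k, v in classes.items()}

def classify(x):
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

# Per-class accuracy, as the study advocates, plus the usual average
per_class_acc = {
    k: float(np.mean([classify(x) == k for x in v])) for k, v in classes.items()
}
overall_acc = float(np.mean(list(per_class_acc.values())))
```

Reporting `per_class_acc` alongside `overall_acc` exposes exactly the failure mode the abstract highlights: a classifier can score well on average while confusing gentle and strong variants of the same touch.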

