On the Integration of Adaptive and Interactive Robotic Smart Spaces

Author(s):  
Mauro Dragone ◽  
Joe Saunders ◽  
Kerstin Dautenhahn

Enabling robots to operate seamlessly as part of smart spaces is an important and long-standing challenge for robotics R&D and a key enabler for a range of advanced robotic applications, such as Ambient Assisted Living (AAL) and home automation. The integration of these technologies is currently being pursued from two largely distinct viewpoints: on the one hand, people-centred initiatives focus on improving user acceptance by tackling human-robot interaction (HRI) issues, often adopting a social robotics approach, and by giving the designer, and to a limited degree the end user(s), control over personalization and product customisation features. On the other hand, technologically driven initiatives are building impersonal but intelligent systems that proactively and autonomously adapt their operations to fit changing requirements and evolving users' needs, but which largely ignore and do not leverage human-robot interaction and may thus lead to poor user experience and acceptance. In order to inform the development of a new generation of smart robotic spaces, this paper analyses and compares these research strands with a view to proposing integrated solutions with both advanced HRI and online adaptation capabilities.

AI Magazine ◽  
2017 ◽  
Vol 37 (4) ◽  
pp. 83-88
Author(s):  
Christopher Amato ◽  
Ofra Amir ◽  
Joanna Bryson ◽  
Barbara Grosz ◽  
Bipin Indurkhya ◽  
...  

The Association for the Advancement of Artificial Intelligence, in cooperation with Stanford University's Department of Computer Science, presented the 2016 Spring Symposium Series on Monday through Wednesday, March 21-23, 2016, at Stanford University. The titles of the seven symposia were (1) AI and the Mitigation of Human Error: Anomalies, Team Metrics and Thermodynamics; (2) Challenges and Opportunities in Multiagent Learning for the Real World; (3) Enabling Computing Research in Socially Intelligent Human-Robot Interaction: A Community-Driven Modular Research Platform; (4) Ethical and Moral Considerations in Non-Human Agents; (5) Intelligent Systems for Supporting Distributed Human Teamwork; (6) Observational Studies through Social Media and Other Human-Generated Content; and (7) Well-Being Computing: AI Meets Health and Happiness Science.


2021 ◽  
Vol 12 (1) ◽  
pp. 258
Author(s):  
Marek Čorňák ◽  
Michal Tölgyessy ◽  
Peter Hubinský

The concept of “Industry 4.0” relies heavily on the utilization of collaborative robotic applications. As more workers are required to work alongside robots, the need for an effective, natural, and ergonomic interface arises. Designing and implementing natural forms of human–robot interaction (HRI) is key to ensuring efficient and productive collaboration between humans and robots. This paper presents a gestural framework for controlling a collaborative robotic manipulator using pointing gestures. The core principle is that the user sends the robot’s end effector to the location towards which they point with their hand; the idea is derived from the concept of so-called “linear HRI”. The framework utilizes a UR5e collaborative robotic arm and the state-of-the-art Leap Motion hand-tracking sensor, and the user is not required to wear any equipment. The paper gives an overview of the framework’s core method and provides the necessary mathematical background. An experimental evaluation of the method is presented, and the main influencing factors are identified. A dedicated collaborative workspace, the Complex Collaborative HRI Workplace (COCOHRIP), was designed around the gestural framework to evaluate the method and to provide a basis for the future development of HRI applications.
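
The mathematical background is in the paper itself, but the heart of such a pointing interface is typically a ray-plane intersection: extend the tracked pointing direction from the hand until it meets the work surface, and command the end effector to that point. Below is a minimal sketch under that assumption; the function `pointing_target` and all numbers are illustrative, not the authors' implementation, and the Leap Motion hand pose is assumed to be already calibrated into the robot's base frame.

```python
import numpy as np

def pointing_target(hand_pos, finger_dir, plane_point, plane_normal):
    """Intersect the pointing ray (hand position plus finger direction)
    with the workspace plane to find the commanded end-effector target.
    All vectors are 3-D arrays expressed in the robot's base frame."""
    d = np.asarray(finger_dir, dtype=float)
    d /= np.linalg.norm(d)                      # unit pointing direction
    denom = np.dot(plane_normal, d)
    if abs(denom) < 1e-6:                       # ray parallel to the plane
        return None
    t = np.dot(plane_normal, plane_point - hand_pos) / denom
    if t < 0:                                   # plane is behind the user
        return None
    return hand_pos + t * d                     # intersection point

# Example: table top at z = 0, user points forward and down.
target = pointing_target(
    hand_pos=np.array([0.2, -0.4, 0.6]),
    finger_dir=np.array([0.1, 0.5, -0.8]),
    plane_point=np.zeros(3),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
print(target)  # where the end effector would be sent
```

A real deployment would additionally smooth the tracked direction over several frames and clamp the target to the robot's reachable workspace before sending a motion command to the UR5e.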


Micromachines ◽  
2018 ◽  
Vol 9 (11) ◽  
pp. 576
Author(s):  
Gaoyang Pang ◽  
Jia Deng ◽  
Fangjinhua Wang ◽  
Junhui Zhang ◽  
Zhibo Pang ◽  
...  

For industrial manufacturing, industrial robots are required to work together with human counterparts on certain special occasions, where human workers share their skills with robots. Close human–robot interaction brings increasing safety challenges, which can be properly addressed with sensor-based active control technology. In this article, we designed and fabricated a three-dimensional flexible robot skin made of a piezoresistive nanocomposite to enhance the safety of the collaborative robot. The robot skin endowed a YuMi robot with tactile perception similar to human skin. The sensing unit in the robot skin showed a one-to-one correspondence between force input and resistance output (percentage change in resistance) in the range of 0–6.5 N. Furthermore, the calibration results indicated that the sensing unit offers a maximum force sensitivity (percentage change in resistance per newton of force) of 18.83% N−1 when loaded with an external force of 6.5 N. The fabricated sensing unit showed good reproducibility after loading with a cyclic force (0–5.5 N) at a frequency of 0.65 Hz for 3500 cycles. In addition, to suppress bypass crosstalk in the robot skin, we designed a readout circuit for sampling the tactile data. Moreover, experiments were conducted to estimate the contact/collision force between an object and the robot in real time. The experimental results showed that the implemented robot skin provides an efficient approach to natural and safe human–robot interaction.
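
As a rough illustration of how such a calibrated sensing unit could be read out in software, the sketch below inverts a monotonic calibration curve to recover force from a resistance measurement. The calibration table, the helper `force_from_resistance`, and the assumption that resistance rises with load are all hypothetical; only the 0–6.5 N range and the 18.83% N−1 peak sensitivity come from the abstract.

```python
import numpy as np

# Hypothetical calibration table for one sensing unit: applied force (N)
# versus percentage change in resistance. The steeper upper end reflects
# the higher sensitivity reported near 6.5 N; the values are invented.
CAL_FORCE = np.array([0.0, 1.0, 2.5, 4.0, 5.5, 6.5])       # N
CAL_DELTA = np.array([0.0, 8.0, 22.0, 45.0, 80.0, 122.0])  # % change

def force_from_resistance(r_measured, r_baseline):
    """Estimate contact force from one taxel's resistance reading by
    inverting the (monotonic) calibration curve via linear interpolation."""
    delta_pct = 100.0 * (r_measured - r_baseline) / r_baseline
    return float(np.interp(delta_pct, CAL_DELTA, CAL_FORCE))

# Example: a 20% resistance rise maps to roughly 2.3 N on this curve.
print(force_from_resistance(r_measured=14.4e3, r_baseline=12e3))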


2021 ◽  
Vol 11 (8) ◽  
pp. 3468
Author(s):  
Jaeryoung Lee

The use of affective speech in robotic applications has increased in recent years, especially in the development and study of emotional prosody for specific groups of people. The current work proposes a prosody-based communication system that accounts for the limited speech-recognition parameters available for such groups, for example the elderly. This work explored which types of voices were more effective for understanding the presented information, and whether the affect of the robot's voice was reflected in the emotional states of listeners. Using the functions of a small humanoid robot, two experiments were conducted to measure comprehension and affective reflection, respectively. University students participated in both tests. The results showed that affective voices helped users understand the information, and that users felt corresponding negative emotions in conversations with negative voices.
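
The abstract does not describe how the robot's affective voices were parameterized. A common approach, sketched below, is to scale a TTS engine's pitch, speaking rate, and volume per target emotion; the preset values and the `apply_affect` helper are illustrative assumptions, not the paper's settings.

```python
from dataclasses import dataclass

@dataclass
class Prosody:
    pitch_scale: float   # multiplier on baseline fundamental frequency
    rate_scale: float    # multiplier on baseline speaking rate
    volume_scale: float  # multiplier on baseline loudness

# Invented presets for illustration only.
AFFECT_PRESETS = {
    "neutral":  Prosody(1.00, 1.00, 1.00),
    "positive": Prosody(1.15, 1.10, 1.05),  # higher, faster, slightly louder
    "negative": Prosody(0.90, 0.85, 0.95),  # lower, slower, softer
}

def apply_affect(text: str, affect: str) -> dict:
    """Bundle an utterance with the prosody settings handed to a TTS engine."""
    p = AFFECT_PRESETS[affect]
    return {"text": text, "pitch": p.pitch_scale,
            "rate": p.rate_scale, "volume": p.volume_scale}

print(apply_affect("Your appointment is at three.", "negative"))
```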


Author(s):  
Dylan F. Glas ◽  
Koji Kamei ◽  
Takayuki Kanda ◽  
Takahiro Miyashita ◽  
Norihiro Hagita

2019 ◽  
Author(s):  
Diego Cardoso Alves ◽  
Paula Dornhofer Paro Costa

Human-robot interaction poses many challenges, and artificial intelligence researchers are expected to improve scene perception, social navigation, and engagement. Considerable attention is being devoted to the development of computer vision and multimodal sensing approaches focused on the evolution of social robotic systems and the improvement of social-model accuracy. Most recent work in social robotics addresses the engagement process with a focus on maintaining a previously established conversation. This work instead studies initial human-robot interaction contexts, proposing a system that analyzes a social scenario by detecting and analyzing persons and surrounding features in a scene. RGB and depth frames, as well as audio data, were used to achieve better performance in indoor scene monitoring and human behavior analysis.


Author(s):  
Samuel G. Collins ◽  
Goran Trajkovski

In this chapter, we give an overview of the results of a human-robot interaction experiment in a near zero-context environment. We stimulate the formation of a network joining human agents and non-human agents in order to examine emergent conditions and social actions. Human subjects, in teams of three to four, are presented with a task: to coax a robot (by any means) from one side of a table to the other, without knowing what sensory and motor abilities the robot is equipped with. On the one hand, the “goal” of the exercise is to “move” the robot through any linguistic or paralinguistic means. From the perspective of the investigators, however, the goal is both broader and more nebulous: to stimulate any emergent interactions whatsoever between agents, human or non-human. Here we discuss emergent social phenomena in this assemblage of human and machine, in particular turn-taking and discourse, suggesting (counter-intuitively) that the “transparency” of non-human agents may not be the most effective way to generate multi-agent sociality.


2019 ◽  
Vol 16 (1) ◽  
pp. 172988141881797 ◽  
Author(s):  
Mauricio E Reyes ◽  
Ivan V Meza ◽  
Luis A Pineda

The facial expression of anger can be useful for directing interaction between agents, especially in unclear and cluttered environments. The presence of an angry face triggers a process of analysis and diagnosis in the subject who notices it, which can affect their behavior toward the one expressing the emotion. To study this effect in human–robot interaction, an expressive robotic face was designed and constructed, and its influence on human action and attention was analyzed in two collaborative tasks. Results from a digital survey, an experimental interaction, and a questionnaire indicated that anger is the best-recognized universal facial expression, that it has a regulatory effect on human action, and that it draws human attention when an unclear condition arises during the task. An additional finding was that the prolonged presence of an angry face reduces its impact compared to positive expressions.


2020 ◽  
Vol 11 (1) ◽  
pp. 379-389
Author(s):  
Angelina Aleksandrovich ◽  
Leonardo Mariano Gomes

This research explores multisensory sexual arousal in men and women and how it can be implemented and shared between multiple individuals in Virtual Reality (VR). This is achieved by stimulating the human senses with immersive technology, including visual, olfactory, auditory, and haptic triggers. Participants are invited into VR to test various sensory triggers and to assess them as sexually arousing or not. Conclusions are drawn from a literature review of VR experiments related to sexuality, the concepts of perception and multisensory experiments, and data collected from self-reports. The goal of this research is to establish that sexual arousal is a multisensory event that may or may not be linked to the presence or thought of the intended object of desire (a sexual partner). By examining what stimulates arousal, we better understand the multisensory capacity of humans, leading not only to richer sexual experiences but also to the further development of wearable sextech products, soft robotics, and multisensory learning machines. This understanding supports other research on human-robot interaction and on the detection and transmission of affection in both physical and virtual realities, and shows how VR technology can help design a new generation of sex robots.

