Robot Vision, Autonomous Vehicles, and Human Robot Interaction

Author(s):  
Katsushi Ikeuchi ◽  
Yasuyuki Matsushita ◽  
Ryusuke Sagawa ◽  
Hiroshi Kawasaki ◽  
Yasuhiro Mukaigawa ◽  
...  


Author(s):  
Giorgio Metta

This chapter outlines a number of research lines that, starting from the observation of nature, attempt to mimic human behavior in humanoid robots. Humanoid robotics is one of the most exciting proving grounds for the development of biologically inspired hardware and software: machines that attempt to recreate some of the abilities and characteristics of living beings shaped by billions of years of evolution. Humanoids could be especially useful for their ability to “live” in human-populated environments, occupying the same physical space as people and using tools that have been designed for people. Natural human–robot interaction is also an important facet of humanoid research. Finally, learning and adapting from experience, the hallmark of human intelligence, may require some approximation to the human body in order to attain capacities similar to those of humans. This chapter focuses particularly on compliant actuation, soft robotics, biomimetic robot vision, robot touch, and brain-inspired motor control in the context of the iCub humanoid robot.


Author(s):  
Soo-Han Kang ◽  
Ji-Hyeong Han

Robot vision provides the most important information to robots, allowing them to read the context and interact with human partners successfully. Moreover, the best way for humans to recognize a robot's visual understanding during human-robot interaction (HRI) is for the robot to explain its understanding in natural language. In this paper, we propose a new approach to interpret robot vision from an egocentric standpoint and to generate descriptions that explain egocentric videos, particularly for HRI. Because robot vision corresponds to egocentric video on the robot's side, it contains exocentric view information as well as egocentric view information. Thus, we propose a new dataset, referred to as the global, action, and interaction (GAI) dataset, which consists of egocentric video clips and GAI descriptions in natural language representing both egocentric and exocentric information. An encoder-decoder based deep learning model is trained on the GAI dataset, and its performance on description generation is evaluated. We also conduct experiments in actual environments to verify whether the GAI dataset and the trained deep learning model can improve a robot vision system.
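For context, the description-generation step in such a system could look roughly like the sketch below: per-frame CNN features are summarized by a recurrent encoder, and a recurrent decoder emits the description token by token. The GRU layers, feature dimension, and vocabulary size are illustrative assumptions, not the model actually trained on the GAI dataset.

```python
# Minimal sketch of an encoder-decoder video-description model of the kind
# described above. Layer sizes, vocabulary, and the use of GRUs are
# assumptions for illustration, not the authors' actual GAI model.
import torch
import torch.nn as nn


class VideoCaptioner(nn.Module):
    def __init__(self, feat_dim=2048, hidden_dim=512, vocab_size=5000, embed_dim=256):
        super().__init__()
        # Encoder: summarizes a sequence of per-frame CNN features.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Decoder: generates the description token by token.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frame_feats, captions):
        # frame_feats: (B, T_frames, feat_dim) pre-extracted CNN features
        # captions:    (B, T_words) token ids of the target description
        _, h = self.encoder(frame_feats)      # h: (1, B, hidden_dim)
        emb = self.embed(captions)            # (B, T_words, embed_dim)
        dec_out, _ = self.decoder(emb, h)     # decoder conditioned on the video
        return self.out(dec_out)              # (B, T_words, vocab_size)


if __name__ == "__main__":
    model = VideoCaptioner()
    feats = torch.randn(2, 30, 2048)          # 2 clips, 30 frames each
    caps = torch.randint(0, 5000, (2, 12))    # 2 toy target descriptions
    print(model(feats, caps).shape)           # torch.Size([2, 12, 5000])
```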


2013 ◽  
pp. 257-280
Author(s):  
Wenjie Yan ◽  
Elena Torta ◽  
David van der Pol ◽  
Nils Meins ◽  
Cornelius Weber ◽  
...  

This chapter presents an overview of a typical Ambient Assisted Living (AAL) scenario in which a robot navigates to a person to convey information. Indoor robot navigation is a challenging task due to the complexity of real-home environments and the need for online learning abilities to adjust to dynamic conditions. A comparison between systems with different sensor typologies shows that vision-based systems promise good performance and a wide scope of usage at reasonable cost. Moreover, vision-based systems can perform different tasks simultaneously by applying different algorithms to the input data stream, thus enhancing the flexibility of the system. The authors introduce the state of the art of several computer vision methods for realizing indoor robot navigation to a person and human-robot interaction. A case study has been conducted in which a robot, as part of an AAL system, navigates to a person and interacts with her. The authors evaluate this test case and give an outlook on the potential of learning robot vision in ambient homes.
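As one concrete illustration of a vision-based building block for "navigate to a person", the sketch below detects a person in a camera frame and turns the detection into a steering command. The OpenCV HOG person detector and the proportional steering rule are assumptions for illustration; the chapter's AAL system combines several such computer vision methods.

```python
# Minimal sketch: detect a person in the camera image and convert the
# bounding box into a steering command. The HOG detector and the gain
# value are illustrative assumptions, not the chapter's full AAL system.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())


def person_steering(frame, gain=0.005):
    """Return an angular velocity that centers the largest detected person."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    if len(boxes) == 0:
        return None                         # no person found: keep exploring
    x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])
    person_cx = x + w / 2.0
    image_cx = frame.shape[1] / 2.0
    return gain * (image_cx - person_cx)    # turn towards the person


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)               # robot's onboard camera (assumed)
    ok, frame = cap.read()
    if ok:
        print("angular velocity:", person_steering(frame))
    cap.release()
```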


2019 ◽  
Vol 14 (1) ◽  
pp. 22-30
Author(s):  
Dongkeon Park ◽  
Kyeong-Min Kang ◽  
Jin-Woo Bae ◽  
Ji-Hyeong Han

2019 ◽  
Vol 16 (2) ◽  
pp. 172988141983959 ◽  
Author(s):  
Francisco Rubio ◽  
Francisco Valero ◽  
Carlos Llopis-Albert

Humanoid robots, unmanned rovers, entertainment pets, drones, and so on are great examples of mobile robots. They can be distinguished from other robots by their ability to move autonomously, with enough intelligence to react and make decisions based on their perception of the environment. Mobile robots must have some source of input data, some way of decoding that input, and a way of taking actions (including their own motion) to respond to a changing world. The need to sense and adapt to an unknown environment requires a powerful cognition system. Nowadays, there are mobile robots that can walk, run, jump, and so on, like their biological counterparts. Several fields of robotics have arisen, such as wheeled mobile robots, legged robots, flying robots, robot vision, and artificial intelligence, which involve different technological areas such as mechanics, electronics, and computer science. In this article, the world of mobile robots is explored, including the new trends. These new trends are led by artificial intelligence, autonomous driving, network communication, cooperative work, nanorobotics, friendly human–robot interfaces, safe human–robot interaction, and emotion expression and perception. Furthermore, these new trends are applied to different fields such as medicine, health care, sports, ergonomics, industry, distribution of goods, and service robotics. These trends will continue to evolve in the coming years.
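The sense-decode-act pattern described above can be sketched as a simple control loop. The range-sensor stand-in, the safety threshold, and the velocity values below are illustrative assumptions rather than any specific robot's API.

```python
# Minimal sketch of the sense-decode-act loop: read sensor data, interpret
# it, and choose a motion command. Sensor model and thresholds are assumed.
import random
import time


def sense():
    # Stand-in for a real sensor driver: distance to the nearest obstacle (m).
    return random.uniform(0.1, 3.0)


def decide(distance, safe_distance=0.5):
    # Decode the perception into an action for a differential-drive base.
    if distance < safe_distance:
        return {"linear": 0.0, "angular": 0.8}   # obstacle ahead: turn in place
    return {"linear": 0.3, "angular": 0.0}       # path clear: move forward


def act(command):
    # Stand-in for the motor commands sent to the robot base.
    print(f"v={command['linear']:.1f} m/s, w={command['angular']:.1f} rad/s")


if __name__ == "__main__":
    for _ in range(5):                           # a few control cycles
        act(decide(sense()))
        time.sleep(0.1)
```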

