Sensor fusion based human detection and tracking system for human-robot interaction

Author(s):  
Kai Siang Ong ◽  
Yuan Han Hsu ◽  
Li Chen Fu
2019 ◽  
Vol E102.B (4) ◽  
pp. 708-721
Author(s):  
Toshihiro KITAJIMA ◽  
Edwardo Arata Y. MURAKAMI ◽  
Shunsuke YOSHIMOTO ◽  
Yoshihiro KURODA ◽  
Osamu OSHIRO

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Yipeng Zhu ◽  
Tao Wang ◽  
Shiqiang Zhu

Purpose: This paper aims to develop a robust person-tracking method for human-following robots. The tracking system adopts the multimodal fusion of millimeter wave (MMW) radars and monocular cameras for perception. A prototype human-following robot is developed and evaluated using the proposed tracking system.

Design/methodology/approach: Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information about objects in view, but cannot provide spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed so that the robot follows the target person at the expected distance.

Findings: A series of experiments under different circumstances is carried out to validate the fusion-based tracking method. Experimental results show that the average tracking errors are around 0.1 m. The robot is also able to handle different situations, overcome short-term interference, and continue to track and follow the target person.

Originality/value: This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlap is handled with the help of the velocity information from the radars. Compared with other state-of-the-art approaches, the sensor fusion method is cost-effective and requires no additional tags to be worn by the person. Its stable performance shows good application prospects for human-following robots.
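To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of how a sparse MMW radar return might be associated with a camera person detection by projecting it into the image, and how a simple proportional controller could keep the expected following distance. The pinhole intrinsics, the bounding-box gating, the gains, and the 1 m following distance are illustrative assumptions.

```python
# Hedged sketch of radar-camera association and distance-keeping control.
# All constants below are assumed values for illustration only.
import numpy as np

FX, CX = 600.0, 320.0          # assumed pinhole intrinsics (pixels)
FOLLOW_DIST = 1.0              # assumed desired following distance (m)
KP_V, KP_W = 0.8, 1.5          # assumed proportional gains

def radar_to_pixel_u(dist, azimuth):
    """Project a radar return (range, azimuth) onto the image column,
    assuming the radar and camera share the same forward axis."""
    x = dist * np.sin(azimuth)  # lateral offset (m)
    z = dist * np.cos(azimuth)  # forward distance (m)
    return FX * x / z + CX

def associate(radar_points, person_boxes):
    """Pair each camera person detection (u_min, u_max) with the closest
    radar return whose projection falls inside the box."""
    pairs = []
    for (u_min, u_max) in person_boxes:
        candidates = [(d, az, vr) for (d, az, vr) in radar_points
                      if u_min <= radar_to_pixel_u(d, az) <= u_max]
        if candidates:
            pairs.append(min(candidates, key=lambda c: c[0]))
    return pairs

def follow_command(target):
    """Closed-loop velocity command keeping the target at FOLLOW_DIST."""
    dist, azimuth, _radial_velocity = target
    v = KP_V * (dist - FOLLOW_DIST)   # forward speed from range error
    w = KP_W * azimuth                # turn rate from bearing error
    return v, w

# Example: one radar return at 2.2 m, 5 degrees, approaching at 0.3 m/s,
# and one person bounding box spanning image columns 300-400.
pairs = associate([(2.2, np.radians(5.0), -0.3)], [(300, 400)])
if pairs:
    print(follow_command(pairs[0]))
```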


2019 ◽  
Vol 13 (3) ◽  
pp. 2998-3009 ◽  
Author(s):  
Apidet Booranawong ◽  
Nattha Jindapetch ◽  
Hiroshi Saito

Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5419
Author(s):  
Pieter Try ◽  
Steffen Schöllmann ◽  
Lukas Wöhle ◽  
Marion Gebhard

People with severe motor impairments such as tetraplegia are restricted in activities of daily living (ADL) and depend on continuous human assistance. Assistive robots perform physical tasks in the context of ADLs to support people in need of assistance. In this work, a sensor fusion algorithm and a robot control algorithm are proposed for localizing the user's mouth and autonomously navigating a robot arm in the assistive drinking task. The sensor fusion algorithm is implemented in a visual tracking system consisting of a 2-D camera and a single-point time-of-flight distance sensor; it uses computer vision to combine camera images and distance measurements and achieve reliable localization of the user's mouth. The robot control algorithm uses visual servoing to navigate a robot-held drinking cup to the mouth and establish physical contact with the lips. The system features an abort command triggered by turning the head and unambiguous tracking of multiple faces, which together enable safe human-robot interaction. A study with nine able-bodied test subjects shows that the proposed system reliably localizes the mouth and autonomously navigates the cup to establish physical contact with it.
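As an illustration of this kind of camera-plus-ToF fusion, the sketch below back-projects a detected mouth pixel using the single-point distance reading as depth and runs a simple servoing loop toward the resulting 3-D point. The intrinsics, the servoing gain, and the contact threshold are assumptions for illustration and do not reflect the paper's actual controller.

```python
# Hedged sketch: fuse a 2-D mouth detection with a ToF distance reading,
# then servo a cup toward the resulting 3-D point. Constants are assumed.
import numpy as np

FX, FY = 600.0, 600.0          # assumed focal lengths (pixels)
CX, CY = 320.0, 240.0          # assumed principal point (pixels)

def mouth_position_3d(u, v, tof_distance):
    """Back-project the mouth pixel (u, v) using the ToF distance as depth,
    giving coordinates in the camera frame (meters)."""
    z = tof_distance
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.array([x, y, z])

def servo_step(cup_pos, mouth_pos, gain=0.5, stop_dist=0.01):
    """One servoing step: move the cup a fraction of the remaining error
    toward the mouth; stop when within stop_dist (contact assumed)."""
    error = mouth_pos - cup_pos
    if np.linalg.norm(error) < stop_dist:
        return cup_pos, True
    return cup_pos + gain * error, False

# Example: mouth detected at pixel (350, 230), ToF sensor reads 0.45 m.
mouth = mouth_position_3d(350, 230, 0.45)
cup, done = np.array([0.0, 0.1, 0.2]), False
while not done:
    cup, done = servo_step(cup, mouth)
print(np.round(cup, 3), done)
```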


2020 ◽  
Vol 4 (4) ◽  
pp. 27
Author(s):  
Liang Cheng Chang ◽  
Shreya Pare ◽  
Mahendra Singh Meena ◽  
Deepak Jain ◽  
Dong Lin Li ◽  
...  

At present, traditional visual-based surveillance systems are becoming impractical, inefficient, and time-consuming, and automation-based surveillance systems have appeared to overcome these limitations. However, automatic systems still face challenges such as occlusion and the need to maintain smooth, continuous tracking of the target. This research proposes a weighted resampling particle filter approach to human tracking that handles these challenges. The primary functions of the proposed system are human detection, human monitoring, and camera control. A codebook matching algorithm defines the human region as the target, and a particle filter tracks it and extracts the target information; the extracted information is then used to drive the camera control. Experiments were conducted in various environments to demonstrate the stability and performance of the proposed system with an active camera.
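For readers unfamiliar with weighted resampling, the following is a minimal particle-filter sketch in the spirit of the described tracker: particles are propagated with a random-walk motion model, reweighted against a detection (e.g., from codebook matching), and resampled in proportion to their weights. The particle count, noise levels, and Gaussian measurement model are assumed values, not the paper's parameters.

```python
# Hedged sketch of a weighted-resampling particle filter for tracking a
# target's image position. All parameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 500                         # number of particles (assumed)
MOTION_STD = 5.0                # random-walk motion noise (pixels)
MEAS_STD = 10.0                 # measurement noise (pixels)

particles = rng.uniform(0, 640, size=(N, 2))   # (x, y) hypotheses
weights = np.full(N, 1.0 / N)

def predict(particles):
    """Propagate particles with a simple random-walk motion model."""
    return particles + rng.normal(0.0, MOTION_STD, particles.shape)

def update(particles, weights, measurement):
    """Reweight particles by their likelihood under a Gaussian measurement
    model centered on the detected target position."""
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    w = weights * np.exp(-0.5 * d2 / MEAS_STD ** 2)
    return w / np.sum(w)

def resample(particles, weights):
    """Weighted resampling: draw particles in proportion to their weights."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Example: track a detection drifting across the image.
for t in range(20):
    detection = np.array([100.0 + 10 * t, 200.0])   # e.g. from codebook matching
    particles = predict(particles)
    weights = update(particles, weights, detection)
    particles, weights = resample(particles, weights)
estimate = np.average(particles, weights=weights, axis=0)
print(np.round(estimate, 1))    # state estimate used to steer the active camera
```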

