3D Visual SLAM Based on Multiple Iterative Closest Point

2015 ◽  
Vol 2015 ◽  
pp. 1-11 ◽  
Author(s):  
Chunguang Li ◽  
Chongben Tao ◽  
Guodong Liu

With the development of novel RGB-D visual sensors, data association has become a fundamental problem in 3D Visual Simultaneous Localization and Mapping (VSLAM). To solve this problem, a VSLAM algorithm based on Multiple Iterative Closest Point (MICP) is presented. By using both the RGB and depth information obtained from an RGB-D camera, 3D models of an indoor environment can be reconstructed, which provide extensive knowledge for mobile robots to accomplish tasks such as VSLAM and Human-Robot Interaction. Because of the limited field of view of the RGB-D camera, additional information about the camera pose is needed. In this paper, the motion of the RGB-D camera is estimated by a motion capture system after a calibration process. Based on the estimated pose, the MICP algorithm is used to refine the alignment. A Kinect-equipped mobile robot running the Robot Operating System, together with the motion capture system, was used for the experiments. The results show that the proposed VSLAM algorithm not only achieves good accuracy and reliability, but also generates the 3D map in real time.
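The alignment step at the heart of ICP can be illustrated with a minimal sketch. The following pure-Python 2D version is illustrative only, not the paper's MICP implementation (which operates on 3D RGB-D point clouds): it alternates nearest-neighbour data association with a closed-form rigid-transform solve.

```python
import math

def icp_2d(source, target, iters=20):
    """Minimal 2D point-to-point ICP: alternate nearest-neighbour data
    association with a closed-form rigid-transform solve (2D Kabsch).
    Returns the accumulated rotation, translation and aligned points."""
    theta, tx, ty = 0.0, 0.0, 0.0
    pts = list(source)
    for _ in range(iters):
        # Data association: match each source point to its nearest target point
        pairs = [(p, min(target, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2))
                 for p in pts]
        n = len(pairs)
        cax = sum(p[0] for p, _ in pairs) / n
        cay = sum(p[1] for p, _ in pairs) / n
        cbx = sum(q[0] for _, q in pairs) / n
        cby = sum(q[1] for _, q in pairs) / n
        # Closed-form rotation from the centred correspondences
        s_cross = sum((p[0]-cax)*(q[1]-cby) - (p[1]-cay)*(q[0]-cbx)
                      for p, q in pairs)
        s_dot = sum((p[0]-cax)*(q[0]-cbx) + (p[1]-cay)*(q[1]-cby)
                    for p, q in pairs)
        dth = math.atan2(s_cross, s_dot)
        c, s = math.cos(dth), math.sin(dth)
        dtx = cbx - (c*cax - s*cay)
        dty = cby - (s*cax + c*cay)
        # Apply the incremental transform and compose it with the estimate
        pts = [(c*x - s*y + dtx, s*x + c*y + dty) for x, y in pts]
        theta += dth
        tx, ty = c*tx - s*ty + dtx, s*tx + c*ty + dty
    return theta, tx, ty, pts
```

An external pose prior, such as the motion capture estimate used in the paper, plays the role of a good initial guess: it keeps the nearest-neighbour associations correct so the iterations converge.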


i-com ◽  
2017 ◽  
Vol 16 (2) ◽  
pp. 71-85
Author(s):  
Philipp Graf ◽  
Manuela Marquardt ◽  
Diego Compagna

We conducted a Human-Robot Interaction (HRI) study during a science event, using a mixed-method experimental approach with quantitative and qualitative data (an adapted version of the Godspeed Questionnaire and audio-visual material analysed videographically). The main purpose of the research was to gain insight into the relevance of the so-called "point of interaction" for a successful and user-friendly interaction with a non-anthropomorphic robot. We elaborate on this concept with reference to sociological theories under the headings of "addressability" and "social address" and generate hypotheses informed by prior research and theoretical reflection. We implemented an interface on our robot system, comprising two LEDs that indicate the status of the robot/interaction and might serve as a basal form of embodied social address. In one experimental condition, the robot's movements were accompanied by a light choreography; the other condition was conducted without the LEDs. Our findings suggest that social address is potentially relevant for the interaction partner as a source of additional information, especially in contingent situations. Nevertheless, the overall ratings on the Godspeed scales showed no significant differences between the light conditions. Several possible reasons for this are discussed. Limitations and advantages are pointed out in the conclusion.



2014 ◽  
Vol 989-994 ◽  
pp. 2651-2654
Author(s):  
Yan Song ◽  
Bo He

In this paper, a novel feature-based real-time visual Simultaneous Localization and Mapping (SLAM) system is proposed. The system generates colored 3D reconstruction models and an estimated 3D trajectory using a Kinect-style camera. The Microsoft Kinect, a low-priced 3D camera, is the only sensor used in our experiments. Kinect-style sensors provide RGB-D (red-green-blue depth) data, which contain a 2D image together with per-pixel depth information. ORB (Oriented FAST and Rotated BRIEF) is the algorithm used to extract image features, chosen to speed up the whole system. Our system can be used to generate detailed 3D reconstruction models, and an estimated 3D trajectory of the sensor is also given. The experimental results demonstrate that our system performs robustly and effectively, both in producing detailed 3D models and in mapping the camera trajectory.
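ORB descriptors are binary strings compared with the Hamming distance, which is what makes ORB-based matching fast. A minimal brute-force matcher with a ratio test might look like the sketch below; descriptors are modelled as plain integers for illustration (a real ORB descriptor is 256 bits), and the function names are not from the paper.

```python
def hamming(a, b):
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(a ^ b).count("1")

def match_descriptors(query, train, ratio=0.8):
    """Brute-force matcher with a ratio test, as commonly used with
    binary descriptors such as ORB's rotated BRIEF. Returns tuples of
    (query index, train index, distance) for confident matches only."""
    matches = []
    for qi, qd in enumerate(query):
        # Rank all candidates by Hamming distance
        dists = sorted((hamming(qd, td), ti) for ti, td in enumerate(train))
        best, second = dists[0], dists[1]
        # Keep the match only if the best is clearly better than the runner-up
        if best[0] < ratio * second[0]:
            matches.append((qi, best[1], best[0]))
    return matches
```

In a full pipeline the surviving matches would feed a RANSAC pose estimate between consecutive RGB-D frames.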



2020 ◽  
Vol 10 (19) ◽  
pp. 6995
Author(s):  
Jing Qi ◽  
Xilun Ding ◽  
Weiwei Li ◽  
Zhonghua Han ◽  
Kun Xu

Hand postures and speech are convenient means of communication for humans and can be used in human–robot interaction. Based on the structural and functional characteristics of our integrated leg-arm hexapod robot, which performs reconnaissance and rescue tasks in public security applications, a method linking the movement and manipulation of the robot through the visual and auditory channels is proposed, and a system based on hand posture and speech recognition is described. The developed system contains a speech module, a hand posture module, a fusion module, a mechanical structure module, a control module, a path planning module and a 3D SLAM (Simultaneous Localization and Mapping) module. Three modes, i.e., the hand posture mode, the speech mode, and a combination of the two, are used in different situations. The hand posture mode is used for reconnaissance tasks, and the speech mode is used to query the path and control the movement and manipulation of the robot. The combination of the two modes can be used to avoid ambiguity during interaction. A semantic-understanding-based task slot structure is developed using the visual and auditory channels. In addition, a method of task planning based on answer-set programming is developed, and a network-based data interaction system is designed to control the robot's movements remotely with Chinese instructions over a wide area network. Experiments were carried out to verify the performance of the proposed system.
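One way such a fusion module can avoid ambiguity is to let the posture channel disambiguate slots that speech recognition left with several candidate values. The sketch below uses an invented slot schema for illustration; it does not reproduce the paper's actual task-slot structure.

```python
def fuse_modalities(speech_slots, posture_slots):
    """Merge task slots from the speech and hand-posture channels.
    Speech fills most slots; a posture result resolves any slot the
    speech channel left ambiguous (represented as a list of candidates).
    All slot names here are illustrative, not the paper's schema."""
    task = {}
    for slot, value in speech_slots.items():
        if isinstance(value, list):
            # Ambiguous slot: prefer the posture hint if it is a candidate
            hint = posture_slots.get(slot)
            task[slot] = hint if hint in value else value[0]
        else:
            task[slot] = value
    # Add slots that only the posture channel produced
    for slot, value in posture_slots.items():
        task.setdefault(slot, value)
    return task
```

For example, a spoken "open the door" plus a pointing posture could resolve which of two doors is meant, which is the kind of ambiguity the combined mode is described as handling.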



2014 ◽  
Vol 31 (8) ◽  
pp. 1709-1719
Author(s):  
Ming-Yuan Shieh ◽  
Chung-Yu Hsieh ◽  
Tsung-Min Hsieh

Purpose – The purpose of this paper is to propose a fast object detection algorithm based on structured light analysis, which aims to detect and recognize human gestures and poses and then derive the corresponding commands for human-robot interaction control. Design/methodology/approach – The human poses are estimated and analyzed by the proposed scheme, and the resultant data, processed by a fuzzy decision-making system, are used to launch the corresponding robotic motions. The RGB camera and the infrared light module together perform distance estimation of one or several bodies. Findings – The modules provide not only image perception but also skeleton detection. A laser source in the infrared light module emits invisible infrared light, which passes through a filter and is scattered into a semi-random but constant pattern of small dots projected onto the environment in front of the sensor. The reflected pattern is then detected by an infrared camera and analyzed for depth estimation. Since the depth of an object is a key parameter for pose recognition, one can estimate the distance to each dot and obtain depth information from the known distance between emitter and receiver. Research limitations/implications – Future work will consider reducing the computation time of object estimation and tuning parameters adaptively. Practical implications – The experimental results demonstrate the feasibility of the proposed system. Originality/value – This paper achieves real-time human-robot interaction by visual detection based on structured light analysis.
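The depth calculation described above is a triangulation: each projected dot's shift between the reference pattern and the observed pattern (its disparity) yields depth via Z = f·b/d. A minimal sketch, with Kinect-class values assumed purely for illustration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulated depth for one projected dot: Z = f * b / d, where
    d is the dot's observed shift (disparity) in pixels, f the IR
    camera focal length in pixels, and b the emitter-receiver baseline
    in metres. The parameter values used below are illustrative,
    roughly in the range of a Kinect-class structured-light sensor."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

With an assumed focal length of 580 px and a 7.5 cm baseline, a dot shifted by 29 px triangulates to 1.5 m, which is why larger disparities correspond to nearer surfaces.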



Author(s):  
Rajesh Kannan Megalingam ◽  
Motheram Manaswini ◽  
Jahnavi Yannam ◽  
Vignesh S Naick ◽  
Gutlapalli Nikhil Chowdary

