3D Object Recognition and Relative Localization using a 3D sensor Embedded on a Mobile Robot

2020 ◽  
Author(s):  
Gopi Krishna Erabati

The technology in the current research scenario is marching towards automation for higher productivity with accurate and precise product development. Vision and Robotics are domains which work to create autonomous systems and are the key technology in the quest for mass productivity. Automation in an industry can be achieved by detecting interactive objects and estimating their pose in order to manipulate them. Therefore object localization (i.e., pose), which includes the position and orientation of the object, has profound significance. The applications of object pose estimation range from industrial automation to the entertainment industry, and from health care to surveillance. Pose estimation of objects is significant in many cases, for instance so that robots can manipulate objects, or for accurate rendering of Augmented Reality (AR), among others. This thesis addresses the problem of object pose estimation using 3D data of the scene acquired from 3D sensors (e.g. Kinect, Orbbec Astra Pro, among others). 3D data has the advantage of being independent of object texture and invariant to illumination. The proposal is divided into two phases: an offline phase, where the 3D model template of the object (for estimation of pose) is built using the Iterative Closest Point (ICP) algorithm; and an online phase, where the pose of the object is estimated by aligning the scene to the model using ICP, provided with an initial alignment computed from 3D descriptors (such as Fast Point Feature Histograms (FPFH)). The approach we develop is to be integrated on two different platforms: 1) the humanoid robot 'Pyrene', which has an Orbbec Astra Pro 3D sensor for data acquisition, and 2) an Unmanned Aerial Vehicle (UAV), which has an Intel RealSense Euclid on it. Datasets of objects (an electric drill, a brick, a small cylinder, a cake box) are acquired using the Microsoft Kinect, Orbbec Astra Pro and Intel RealSense Euclid sensors to test the performance of this technique. The objects used to test the approach are the ones used by the robot. The technique is tested in two scenarios: first, when the object is on a table, and second, when the object is held in the hand by a person. The range of the objects from the sensor is 0.6 to 1.6 m. The technique can handle occlusions of the object by the hand (when we hold the object), as ICP can work even when only part of the object is visible in the scene.
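
A rough sketch of this two-phase pipeline is given below, using the open-source Open3D library (an assumption for illustration; the thesis does not prescribe a specific implementation). FPFH features provide the coarse RANSAC-based initial alignment, which ICP then refines. All parameter values (voxel size, search radii, distance thresholds) are illustrative placeholders, not the thesis' settings.

# Hedged sketch: FPFH-initialized ICP pose estimation with Open3D (assumed library).
import open3d as o3d

VOXEL = 0.005  # assumed 5 mm downsampling for objects at 0.6-1.6 m range

def preprocess(pcd):
    """Downsample a cloud, estimate normals and compute FPFH features."""
    down = pcd.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

def estimate_pose(model, scene):
    """Return a 4x4 transform placing the model template in the scene."""
    m_down, m_fpfh = preprocess(model)
    s_down, s_fpfh = preprocess(scene)
    # Online phase, step 1: coarse initial alignment from FPFH correspondences
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        m_down, s_down, m_fpfh, s_fpfh, mutual_filter=True,
        max_correspondence_distance=3 * VOXEL,
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    # Online phase, step 2: ICP refinement of the coarse pose
    fine = o3d.pipelines.registration.registration_icp(
        m_down, s_down, 2 * VOXEL, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation

# Usage (hypothetical file names):
# model = o3d.io.read_point_cloud("drill_template.pcd")  # offline ICP-merged template
# scene = o3d.io.read_point_cloud("scene.pcd")           # live sensor frame
# T = estimate_pose(model, scene)                        # object pose; tolerates partial views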

Author(s):  
Bharat Joshi ◽  
Md Modasshir ◽  
Travis Manderson ◽  
Hunter Damron ◽  
Marios Xanthidis ◽  
...  

Author(s):  
D. Pagliari ◽  
F. Menna ◽  
R. Roncella ◽  
F. Remondino ◽  
L. Pinto

3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by a growing demand for interactivity in the video-game and e-entertainment market. The Microsoft Kinect device was created from the idea of a sensor that allows users to play without having to hold any remote controller. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing polygonal meshes of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and new reconstruction software is created to produce more accurate models.
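
The depth-correction idea described above can be sketched as follows. The polynomial form and all coefficients below are assumptions for illustration, not the authors' calibrated model: each raw depth is corrected by a distance-dependent function and then back-projected through the IR camera intrinsics.

# Minimal sketch (assumed form) of depth correction plus back-projection.
import numpy as np

a0, a1, a2 = 0.0, 1.0, 0.0  # hypothetical correction model d' = a0 + a1*d + a2*d**2
fx, fy, cx, cy = 580.0, 580.0, 320.0, 240.0  # illustrative IR intrinsics, not calibrated values

def depth_to_points(depth_m):
    """depth_m: HxW array of raw depths in metres -> Nx3 corrected 3D points."""
    d = a0 + a1 * depth_m + a2 * depth_m**2   # apply the (assumed) depth correction
    v, u = np.indices(depth_m.shape)          # pixel row/column grids
    x = (u - cx) / fx * d                     # back-project through the pinhole model
    y = (v - cy) / fy * d
    pts = np.stack([x, y, d], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # drop invalid (zero-depth) pixels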


2015 ◽  
Vol 22 (4) ◽  
pp. 469-478 ◽  
Author(s):  
Andrzej Skalski ◽  
Bartosz Machura

This paper presents a comprehensive metrological analysis of the Microsoft Kinect motion sensor, performed using a proprietary flat marker. The designed marker was used to estimate its position in the external coordinate system associated with the sensor. The study includes calibration of the RGB and IR cameras, parameter identification and image registration. The metrological analysis is based on data corrected for the sensor's optical distortions. From the metrological point of view, localization errors are related to the distance of an object from the sensor; therefore, the rotation angles were determined and an accuracy assessment of the depth maps was performed. The analysis was carried out for distances from the marker in the range of 0.8–1.65 m. The maximum average error was 23 mm at a distance of 1.6 m.
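
A minimal sketch of the marker-based localization step is shown below, assuming an OpenCV-style PnP solver and hypothetical marker size and calibration values (the paper's own marker design and calibrated parameters are not reproduced here).

# Hedged sketch: estimate a flat marker's pose in the camera frame from its
# known corner geometry, using OpenCV's PnP solver (assumed approach).
import cv2
import numpy as np

MARKER = 0.20  # assumed 20 cm square marker
object_pts = np.array([[0, 0, 0], [MARKER, 0, 0],
                       [MARKER, MARKER, 0], [0, MARKER, 0]], dtype=np.float32)
K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])  # illustrative RGB intrinsics
dist = np.zeros(5)               # assume distortion already corrected

def marker_pose(image_corners):
    """image_corners: 4x2 detected corner pixels -> (rvec, tvec) of the marker."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_corners.astype(np.float32),
                                  K, dist)
    assert ok, "PnP failed"
    return rvec, tvec  # np.linalg.norm(tvec) gives the marker-to-sensor distance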


2010 ◽  
Author(s):  
Øystein Skotheim ◽  
Jens T. Thielemann ◽  
Asbjørn Berge ◽  
Arne Sommerfelt
