Structure-based object representation and classification in mobile robotics through a Microsoft Kinect

2013 ◽  
Vol 61 (12) ◽  
pp. 1665-1679 ◽  
Author(s):  
Antonio Sgorbissa ◽  
Damiano Verda

2018 ◽
Vol 10 (5) ◽  
pp. 140-159
Author(s):  
B. Hisham ◽  
A. Hamouda

2020 ◽  
Author(s):  
Gopi Krishna Erabati

The technology in the current research scenario is marching towards automation for higher productivity with accurate and precise product development. Vision and robotics are domains which work to create autonomous systems and are the key technology in the quest for mass productivity. Automation in an industry can be achieved by detecting interactive objects and estimating their pose in order to manipulate them. Therefore object localization (i.e., pose), which includes the position and orientation of the object, has profound significance. Applications of object pose estimation range from industrial automation to the entertainment industry and from health care to surveillance. Pose estimation of objects is very significant in many cases, for example in order for robots to manipulate objects or for accurate rendering of Augmented Reality (AR), among others. This thesis tries to solve the problem of object pose estimation using 3D data of the scene acquired from 3D sensors (e.g., Kinect, Orbbec Astra Pro, among others). 3D data has the advantage of independence from object texture and invariance to illumination. The proposal is divided into two phases: an offline phase, where the 3D model template of the object (for estimation of pose) is built using the Iterative Closest Point (ICP) algorithm, and an online phase, where the pose of the object is estimated by aligning the scene to the model using ICP, provided with an initial alignment obtained using 3D descriptors (such as Fast Point Feature Histograms (FPFH)). The approach we develop is to be integrated on two different platforms: 1) the humanoid robot 'Pyrene', which has an Orbbec Astra Pro 3D sensor for data acquisition, and 2) an Unmanned Aerial Vehicle (UAV), which carries an Intel RealSense Euclid. Datasets of objects (an electric drill, a brick, a small cylinder, a cake box) are acquired using Microsoft Kinect, Orbbec Astra Pro and Intel RealSense Euclid sensors to test the performance of this technique. The objects used to test the approach are the ones manipulated by the robot. The technique is tested in two scenarios: first, when the object is on a table, and second, when the object is held in the hand of a person. The range of the objects from the sensor is 0.6 to 1.6 m. The technique can handle occlusion of the object by the hand (when the object is held), as ICP can work even if only part of the object is visible in the scene.
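As a rough illustration of the two-stage pipeline the abstract describes (coarse initial alignment from FPFH correspondences, then ICP refinement), the sketch below uses the open-source Open3D library. The thesis does not specify its implementation, so the library choice, file names, voxel size and distance thresholds here are all assumptions; the sketch aligns the model template to the scene, which yields the object pose in the camera frame.

```python
# Minimal sketch (assumes Open3D >= 0.12): FPFH-based coarse alignment
# followed by point-to-plane ICP refinement. All parameters are guesses
# suited to tabletop objects at the 0.6-1.6 m range stated above.
import open3d as o3d

VOXEL = 0.005  # 5 mm downsampling (assumed)

def preprocess(cloud):
    """Downsample, estimate normals and compute FPFH descriptors."""
    down = cloud.voxel_down_sample(VOXEL)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=2 * VOXEL, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=5 * VOXEL, max_nn=100))
    return down, fpfh

model = o3d.io.read_point_cloud("model_template.pcd")  # offline-built template
scene = o3d.io.read_point_cloud("scene.pcd")           # live 3D sensor frame
model_d, model_f = preprocess(model)
scene_d, scene_f = preprocess(scene)

# Coarse initial alignment from FPFH feature correspondences (RANSAC).
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model_d, scene_d, model_f, scene_f, True, 3 * VOXEL,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(3 * VOXEL)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# ICP refinement; tolerates partial views, e.g. occlusion by a hand.
pose = o3d.pipelines.registration.registration_icp(
    model_d, scene_d, 2 * VOXEL, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())
print(pose.transformation)  # 4x4 homogeneous object pose in the camera frame
```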


2005 ◽  
Author(s):  
Huan Li ◽  
John Sweeney ◽  
Krithi Ramamritham ◽  
Roderic Grupen ◽  
Prashant Shenoy

2020 ◽  
Vol 41 (S1) ◽  
pp. s12-s12
Author(s):  
D. M. Hasibul Hasan ◽  
Philip Polgreen ◽  
Alberto Segre ◽  
Jacob Simmering ◽  
Sriram Pemmaraju

Background: Simulations based on models of healthcare worker (HCW) mobility and contact patterns with patients provide a key tool for understanding the spread of healthcare-acquired infections (HAIs). However, simulations suffer from a lack of accurate model parameters. This research uses Microsoft Kinect cameras placed in a patient room in the medical intensive care unit (MICU) at the University of Iowa Hospitals and Clinics (UIHC) to obtain reliable distributions of HCW visit length and of time spent by HCWs near a patient. These data can inform modeling efforts for understanding HAI spread. Methods: Three Kinect cameras (left, right, and door cameras) were placed in a patient room to track the human body (ie, left/right hands and head) at 30 frames per second. The results reported here are based on 7 randomly selected days from a total of 308 observation days. Each tracked body may have multiple raw segments over the 2 camera regions, which we “stitch” together by matching features (eg, direction, velocity) to obtain complete trajectories. Due to camera noise, bodies display unnatural characteristics in a substantial fraction of frames, including frequent and rapid changes in direction and velocity. We use unsupervised learning techniques to identify such “ghost” frames, and we remove from our analysis bodies that have 20% or more ghost frames. Results: The heat map of hand positions (Fig. 1) shows that high-frequency locations are clustered around the bed and more to the patient’s right, in accordance with the general medical practice of performing patient exams from the patient’s right. HCW visit frequency per hour (mean, 6.952; SD, 2.855) has 2 peaks, 1 during the morning shift and 1 during the afternoon shift, with a distinct decrease after midnight. Figure 2 shows the distribution of visit length in minutes (mean, 1.570; SD, 2.679), which is dominated by “check-in” visits of <30 seconds. HCWs do not spend much time within touching distance of patients during short visits, and the fraction of time spent near the patient’s bed seems to increase with visit length up to a point. Conclusions: Using fine-grained data, this research extracts distributions of three critical parameters of HCW–patient interactions: (1) HCW visit length, (2) HCW visit frequency as a function of time of day, and (3) time spent by an HCW within touching distance of the patient as a function of visit length. To the best of our knowledge, we provide the first reliable estimates of these parameters. Funding: None. Disclosures: None.
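The abstract invokes unsupervised learning for ghost-frame detection without naming the method, so the sketch below substitutes a simple physically-motivated speed threshold as an illustrative stand-in. The 30 fps rate and the 20% rejection rule come from the abstract; the 4 m/s hand-speed cap, the array layout, and the function names are assumptions.

```python
# Illustrative ghost-frame filter: flag frames whose implied hand speed is
# physically implausible, then drop bodies with too many flagged frames.
import numpy as np

FPS = 30                 # camera frame rate, from the abstract
MAX_SPEED = 4.0          # m/s; assumed plausibility cap for hand motion
GHOST_FRACTION = 0.20    # rejection threshold stated in the abstract

def ghost_frames(track):
    """Boolean mask over frames; True where the motion looks like noise.

    track: (n_frames, 3) array of hand positions in metres.
    """
    speed = np.linalg.norm(np.diff(track, axis=0), axis=1) * FPS
    mask = np.zeros(len(track), dtype=bool)
    mask[1:] = speed > MAX_SPEED
    return mask

def keep_body(track):
    """Apply the 20%-or-more ghost-frame rejection rule."""
    return ghost_frames(track).mean() < GHOST_FRACTION

# Example: a smooth trajectory with one injected single-frame "teleport".
rng = np.random.default_rng(0)
track = np.cumsum(rng.normal(0, 0.005, size=(300, 3)), axis=0)
track[100] += 2.0        # one-frame jump typical of tracking noise
print(keep_body(track))  # True: only ~1% of frames are flagged
```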


Sensors ◽  
2021 ◽  
Vol 21 (10) ◽  
pp. 3327
Author(s):  
Vicente Román ◽  
Luis Payá ◽  
Adrián Peidró ◽  
Mónica Ballesta ◽  
Oscar Reinoso

Over the last few years, mobile robotics has experienced great development thanks to the wide variety of problems that can be solved with this technology. An autonomous mobile robot must be able to operate in a priori unknown environments, planning its trajectory and navigating to the required target points. With this aim, it is crucial to solve the mapping and localization problems accurately and at an acceptable computational cost. The use of omnidirectional vision systems has emerged as a robust choice thanks to the large amount of information they capture from the environment. These images must be processed to obtain the relevant information that permits solving the mapping and localization problems robustly. The classical frameworks to address this problem are based on the extraction, description and tracking of local features or landmarks. More recently, however, a new family of methods has emerged as a robust alternative in mobile robotics: describing each image as a whole, which leads to conceptually simpler algorithms. While methods based on local features have been extensively studied and compared in the literature, those based on global appearance still merit a deeper study to characterize their performance. In this work, a comparative evaluation of six global-appearance description techniques in localization tasks is carried out, both in terms of accuracy and computational cost. Several sets of images captured in a real environment are used for this purpose, including typical phenomena such as changes in lighting conditions, visual aliasing, partial occlusions and noise.
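The abstract does not identify the six descriptors evaluated, so the sketch below illustrates the general holistic idea with one common choice: a HOG descriptor computed over the whole panoramic image, matched against a map of images captured at known poses by nearest neighbour in descriptor space. The file layout and HOG parameters are illustrative assumptions.

```python
# Minimal sketch of global-appearance localization: describe each image
# as a whole (no local feature extraction) and localize a query image by
# nearest neighbour among map descriptors. Assumes RGB panoramic images.
import numpy as np
from skimage import io, color, transform
from skimage.feature import hog

def global_descriptor(path, size=(64, 256)):
    """One holistic descriptor for the entire panoramic image."""
    img = color.rgb2gray(io.imread(path))
    img = transform.resize(img, size, anti_aliasing=True)
    return hog(img, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1), feature_vector=True)

# Map: descriptors of images captured at known positions (paths assumed).
map_paths = ["map/img_000.png", "map/img_001.png", "map/img_002.png"]
map_desc = np.stack([global_descriptor(p) for p in map_paths])

# Localization: the query inherits the pose of the closest map image.
query = global_descriptor("query.png")
dists = np.linalg.norm(map_desc - query, axis=1)
best = int(np.argmin(dists))
print(f"closest map image: {map_paths[best]} (distance {dists[best]:.3f})")
```

Comparing descriptors of whole images keeps the pipeline conceptually simple, at the price of sensitivity to the lighting changes, visual aliasing, occlusions and noise that the evaluation above deliberately includes.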

