Kinect Fusion improvement using depth camera calibration

Author(s):  
D. Pagliari ◽  
F. Menna ◽  
R. Roncella ◽  
F. Remondino ◽  
L. Pinto

3D modelling of scenes, gesture recognition and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. The Microsoft Kinect was created from the idea of a sensor that allows users to play without having to hold any remote controller. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow using the device as a 3D scanner and producing meshed polygonal models of a static scene simply by moving the Kinect around it. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the IR camera interior and exterior orientation parameters, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.
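As a rough illustration of the kind of per-pixel depth correction described above, the sketch below applies an assumed polynomial error model to each raw depth frame before it would be handed to a Fusion-style volumetric reconstruction. The coefficients, function name and frame layout are hypothetical and are not taken from the paper.

```python
import numpy as np

# Hypothetical calibration result: systematic depth error modelled as a
# polynomial of the measured depth, fitted against reference data.
# The coefficients below are placeholders, not values from the paper.
CAL_COEFFS = np.array([1.8e-5, -0.012, 4.1])  # quadratic error model, millimetres

def correct_depth_frame(depth_mm: np.ndarray) -> np.ndarray:
    """Subtract the modelled systematic error from a raw Kinect depth frame."""
    error = np.polyval(CAL_COEFFS, depth_mm)   # predicted bias per pixel
    corrected = depth_mm - error
    corrected[depth_mm == 0] = 0               # keep invalid (zero) pixels invalid
    return corrected

# Each corrected frame would then be passed to the modified Fusion-style
# integration step instead of the raw sensor frame.
raw = np.random.randint(500, 4000, size=(480, 640)).astype(np.float32)
print(correct_depth_frame(raw).mean())
```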

2020 ◽  
Author(s):  
Gopi Krishna Erabati

The technology in the current research scenario is marching towards automation for higher productivity with accurate and precise product development. Vision and Robotics are domains which work to create autonomous systems and are the key technology in the quest for mass productivity. Automation in an industry can be achieved by detecting interactive objects and estimating their pose in order to manipulate them. Therefore object localization (i.e., pose), which includes the position and orientation of the object, has profound significance. The applications of object pose estimation range from industry automation to the entertainment industry and from health care to surveillance. Pose estimation of objects is very significant in many cases, for example for robots to manipulate objects or for accurate rendering of Augmented Reality (AR), among others. This thesis tries to solve the issue of object pose estimation using 3D data of the scene acquired from 3D sensors (e.g. Kinect, Orbbec Astra Pro, among others). The 3D data have the advantage of independence from object texture and invariance to illumination. The proposal is divided into two phases: an offline phase where the 3D model template of the object (for estimation of pose) is built using the Iterative Closest Point (ICP) algorithm, and an online phase where the pose of the object is estimated by aligning the scene to the model using ICP, provided with an initial alignment using 3D descriptors (like Fast Point Feature Histograms (FPFH)). The approach we develop is to be integrated on two different platforms: 1) the humanoid robot `Pyrene', which has an Orbbec Astra Pro 3D sensor for data acquisition, and 2) an Unmanned Aerial Vehicle (UAV) which carries an Intel RealSense Euclid. Datasets of objects (like an electric drill, a brick, a small cylinder and a cake box) are acquired using the Microsoft Kinect, Orbbec Astra Pro and Intel RealSense Euclid sensors to test the performance of this technique. The objects used to test this approach are the ones used by the robot. The technique is tested in two scenarios: firstly, when the object is on a table, and secondly, when the object is held in hand by a person. The range of the objects from the sensor is 0.6 to 1.6 m. The technique can handle occlusions of the object by the hand (when we hold the object), as ICP can work even if only part of the object is visible in the scene.
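A minimal sketch of the online phase, assuming the Open3D library rather than whatever implementation the thesis actually uses: FPFH features provide a coarse RANSAC alignment of the object template to the scene, which ICP then refines. The file names, voxel size and thresholds are placeholders.

```python
import open3d as o3d

voxel = 0.005  # 5 mm, placeholder

# Hypothetical file names for the pre-built object template and the live scene cloud.
model = o3d.io.read_point_cloud("drill_template.pcd")
scene = o3d.io.read_point_cloud("scene.pcd")

def preprocess(pcd):
    """Downsample, estimate normals and compute FPFH descriptors."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

model_down, model_fpfh = preprocess(model)
scene_down, scene_fpfh = preprocess(scene)

# Coarse alignment from FPFH correspondences (RANSAC), then ICP refinement.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model_down, scene_down, model_fpfh, scene_fpfh, True, voxel * 1.5,
    o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
    [o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

fine = o3d.pipelines.registration.registration_icp(
    model_down, scene_down, voxel, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("Estimated object pose (4x4 transform):\n", fine.transformation)
```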


2018 ◽  
Vol 51 (22) ◽  
pp. 399-404 ◽  
Author(s):  
Alireza Bilesan ◽  
Mohammadhasan Owlia ◽  
Saeed Behzadipour ◽  
Shuhei Ogawa ◽  
Teppei Tsujita ◽  
...  

2013 ◽  
Vol 2013 ◽  
pp. 1-7 ◽  
Author(s):  
Alasdair G. Thin ◽  
Craig Brown ◽  
Paul Meenan

Dance Dance Revolution is a pioneering exergame which has attracted considerable interest for its potential to promote regular exercise and its associated health benefits. The advent of a range of different consumer body motion tracking video game console peripherals raises the question of whether their different technological affordances (i.e., variations in the type and number of body limbs that they can track) influence the user experience of dance-based exergames, both in terms of the level of physical exertion and the nature of the play experience. To investigate these issues, a group of subjects performed a total of six comparable dance routines selected from commercial dance-based exergames (two routines from each game) on three different consoles. The subjects' level of physical exertion was assessed by measuring oxygen consumption and heart rate. They also reported their perceived level of exertion, difficulty, and enjoyment after completing each dance routine. No differences were found in the physiological measures of exertion between the peripherals/consoles. However, there were significant variations in the difficulty and enjoyment ratings between peripherals. The design implications of these results are discussed, including the tension between helping to guide and coordinate player movement and offering greater movement flexibility.


Author(s):  
Seonhong Hwang ◽  
Chung-Ying Tsai ◽  
Alicia M. Koontz

The purpose of this study was to test the concurrent validity and test-retest reliability of the Kinect skeleton tracking algorithm for the measurement of trunk, shoulder, and elbow joint angles during a wheelchair transfer task. Eight wheelchair users were recruited for this study. Joint positions were recorded simultaneously by the Kinect and Vicon motion capture systems while subjects transferred from their wheelchairs to a level bench. Shoulder, elbow, and trunk angles recorded with the Kinect system followed trajectories similar to the angles recorded with the Vicon system, with correlation coefficients larger than 0.71 on both sides (leading arm and trailing arm). The root mean square errors (RMSEs) ranged from 5.18° to 22.46° for the shoulder, elbow, and trunk angles. The 95% limits of agreement (LOA) for the discrepancy between the two systems exceeded the clinically significant level of 5°. The Kinect had very good relative reliability for the measurement of sagittal, frontal, and horizontal trunk angles, as indicated by high intraclass correlation coefficient (ICC) values (>0.90). Small standard error of measurement (SEM) values, indicating good absolute reliability, were observed for all joints except the leading arm's shoulder joint. Relatively large minimal detectable changes (MDCs) were observed in all joint angles. Kinect motion tracking shows promising performance for some upper limb joints; however, more accurate measurement of the joint angles may be required. Therefore, understanding the limitations in the precision and accuracy of the Kinect is imperative before it is utilized.
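For readers unfamiliar with the agreement statistics used here, the following sketch computes the RMSE and Bland-Altman 95% limits of agreement between two synchronized angle series. It is an illustration of the formulas only, not the study's analysis code, and the toy data are invented.

```python
import numpy as np

def agreement_stats(kinect_deg: np.ndarray, vicon_deg: np.ndarray):
    """RMSE, mean bias, and Bland-Altman 95% limits of agreement (degrees)."""
    diff = kinect_deg - vicon_deg
    rmse = np.sqrt(np.mean(diff ** 2))
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    return rmse, bias, loa

# Toy data standing in for synchronized shoulder-angle trajectories.
t = np.linspace(0, 2, 200)
vicon = 60 + 25 * np.sin(2 * np.pi * t)
kinect = vicon + np.random.normal(3, 4, size=t.size)   # biased, noisier estimate
print(agreement_stats(kinect, vicon))
```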


Author(s):  
Zahari Taha ◽  
Mohd Yashim Wong ◽  
Hwa Jen Yap ◽  
Amirul Abdullah ◽  
Wee Kian Yeo

Immersion is one of the most important aspects in ensuring the applicability of Virtual Reality systems to training regimes aiming to improve performance. To ensure that this key aspect is met, the registration of motion between the real world and the virtual environment must be as accurate and as low-latency as possible. Thus, an in-house Inertial Measurement Unit (IMU) system was developed to track the movement of the player's racquet. This IMU tracks 6 DOF motion data and transmits it to the mobile training system for processing. Physically, the custom motion unit is built into the shape of a racquet grip to give a more natural sensation when swinging the racquet. In addition, an adaptive filter framework is established to cope with different racquet movements automatically, enabling real-time 6 DOF tracking by balancing jitter and latency. Experiments were performed to compare the efficacy of our approach with other conventional tracking methods such as the Microsoft Kinect. The results demonstrated noticeably higher accuracy and lower latency compared with the aforementioned methods.
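The paper's adaptive filter is not specified in this abstract; as a generic illustration of the jitter-versus-latency trade-off it describes, the sketch below implements a speed-dependent exponential smoother in the spirit of the One Euro filter. All parameter values are assumptions.

```python
import math

class AdaptiveLowPass:
    """Adaptive exponential smoother: smoothing weakens as motion speeds up,
    trading jitter suppression at rest for low latency during fast swings.
    This illustrates the idea only; it is not the filter from the paper."""

    def __init__(self, min_cutoff=1.0, beta=0.05, rate_hz=100.0):
        self.min_cutoff = min_cutoff   # Hz, smoothing when nearly static
        self.beta = beta               # how strongly speed opens the cutoff
        self.dt = 1.0 / rate_hz
        self.prev = None
        self.prev_deriv = 0.0

    @staticmethod
    def _alpha(cutoff, dt):
        tau = 1.0 / (2.0 * math.pi * cutoff)
        return 1.0 / (1.0 + tau / dt)

    def update(self, x):
        if self.prev is None:
            self.prev = x
            return x
        # Estimate (and lightly smooth) the signal's rate of change.
        deriv = (x - self.prev) / self.dt
        a_d = self._alpha(1.0, self.dt)
        deriv_hat = a_d * deriv + (1 - a_d) * self.prev_deriv
        # Faster motion -> higher cutoff -> less smoothing -> lower latency.
        cutoff = self.min_cutoff + self.beta * abs(deriv_hat)
        a = self._alpha(cutoff, self.dt)
        out = a * x + (1 - a) * self.prev
        self.prev, self.prev_deriv = out, deriv_hat
        return out

# Example: filter one noisy angular-rate channel sample by sample.
f = AdaptiveLowPass()
print([round(f.update(v), 2) for v in (0.0, 0.1, 0.4, 2.5, 2.6)])
```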


2018 ◽  
Vol 218 ◽  
pp. 02014
Author(s):  
Arief Ramadhani ◽  
Achmad Rizal ◽  
Erwin Susanto

Computer vision is a field of research that can be applied to a variety of subjects. One application of computer vision is the hand gesture recognition system. Hand gestures are one of the ways to interact with computers or machines. In this study, hand gesture recognition was used as a password for an electronic key system. The hand gesture recognition utilized the depth sensor of the Microsoft Kinect Xbox 360. The depth sensor captured the hand image, which was then segmented using a threshold. By scanning each pixel, we detected the thumb and the number of other fingers that were open. The hand gesture recognition result was used as a password to unlock the electronic key. The system could recognize nine types of hand gestures, representing the numbers 1 to 9. The average accuracy of the hand gesture recognition system was 97.78% for a single hand sign and 86.5% for a password of three hand signs.
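The study scans pixels directly to detect the thumb and the open fingers; as a stand-in illustration of the same depth-threshold-then-count idea, the sketch below segments a depth band and counts fingers from contour convexity defects with OpenCV. The depth band, kernel size and defect threshold are assumptions, not values from the paper.

```python
import cv2
import numpy as np

def segment_hand(depth_mm, near=500, far=800):
    """Keep only pixels within an assumed hand depth band; return a binary mask."""
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

def count_open_fingers(mask):
    """Rough finger count from convexity defects of the largest hand contour."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep valleys between extended fingers appear as large defect depths.
    deep = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return min(deep + 1, 5)

# Toy depth frame: a filled rectangle standing in for a hand at ~650 mm.
depth = np.full((240, 320), 1200, np.uint16)
depth[60:180, 100:220] = 650
print(count_open_fingers(segment_hand(depth)))
```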


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1291
Author(s):  
Chin-Hsuan Liu ◽  
Posen Lee ◽  
Yen-Lin Chen ◽  
Chen-Wen Yen ◽  
Chao-Wei Yu

A stable posture requires the coordination of multiple joints of the body, and how the human body coordinates these joints to maintain a stable posture remains a subject of research. The number of degrees of freedom (DOFs) of the human motor system is considerably larger than the DOFs required for posture balance, and the manner in which the central nervous system manages this redundancy remains unclear. To understand this phenomenon, in this study, three local inter-joint coordination pattern (IJCP) features were introduced to characterize the strength, changing velocity, and complexity of the inter-joint couplings by computing the correlation coefficients between joint velocity signal pairs. In addition, to quantify the complexity of IJCPs from a global perspective, another set of IJCP features was introduced by performing principal component analysis on all joint velocity signals. A Microsoft Kinect depth sensor was used to acquire the motion of 15 joints of the body. The efficacy of the proposed features was tested using the captured motions of two age groups (18–24 and 65–73 years) standing still. With regard to the redundant DOFs of the joints of the body, the experimental results suggested that the body uses an inter-joint coordination strategy intermediate between the two extreme coordination modes of total joint dependence and total joint independence. In addition, comparative statistical results for the proposed features showed that aging increases the coupling strength, decreases the changing velocity, and reduces the complexity of the IJCPs. These results also suggest that with aging, the balance strategy tends to become more joint dependent. Because of the simplicity of the proposed features and the affordability of the easy-to-use Kinect depth sensor, such an assembly can be used to collect large amounts of data to explore the potential of the proposed features in assessing the performance of the human balance control system.
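A minimal sketch of how such local and global coordination features could be computed from Kinect joint trajectories, assuming simplified definitions (mean absolute pairwise correlation of joint speeds, and the number of principal components explaining 90% of the variance); these are illustrative stand-ins, not the paper's exact formulas.

```python
import numpy as np

def ijcp_features(joint_positions, fs=30.0):
    """Illustrative local/global coordination features from joint trajectories.

    joint_positions: array of shape (frames, joints, 3). The definitions below
    only loosely follow the abstract and are assumptions, not the paper's formulas.
    """
    vel = np.diff(joint_positions, axis=0) * fs          # joint velocities
    speed = np.linalg.norm(vel, axis=2)                  # (frames-1, joints)

    # Local feature: mean absolute pairwise correlation of joint speed signals.
    corr = np.corrcoef(speed.T)                          # (joints, joints)
    iu = np.triu_indices_from(corr, k=1)
    coupling_strength = np.abs(corr[iu]).mean()

    # Global feature: PCA on joint speeds; fewer dominant components
    # suggests less complex (more joint-dependent) coordination.
    centered = speed - speed.mean(axis=0)
    sing_vals = np.linalg.svd(centered, full_matrices=False)[1]
    var_ratio = sing_vals ** 2 / np.sum(sing_vals ** 2)
    n_components_90 = int(np.searchsorted(np.cumsum(var_ratio), 0.90) + 1)

    return coupling_strength, n_components_90

# Toy data: 300 frames of 15 slightly correlated joint trajectories.
rng = np.random.default_rng(0)
base = rng.normal(size=(300, 1, 3))
data = np.cumsum(base + 0.3 * rng.normal(size=(300, 15, 3)), axis=0)
print(ijcp_features(data))
```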

