kinect v2
Recently Published Documents

TOTAL DOCUMENTS: 231 (FIVE YEARS: 89)

H-INDEX: 14 (FIVE YEARS: 6)

2021 ◽  
Author(s):  
Kunkun Zhao ◽  
Chuan Guo ◽  
Haibo Bian ◽  
Jiyong Yu ◽  
Haiying Wen ◽  
...  

2021 ◽  
Author(s):  
Yu Zhai ◽  
Yanlin Qu ◽  
Peng Xu ◽  
Mengyao Li ◽  
Shaokun Han

2021 ◽  
Vol 13 (22) ◽  
pp. 4583
Author(s):  
Chang Li ◽  
Bingrui Li ◽  
Sisi Zhao

To reduce the 3D systematic error of the RGB-D camera and improve measurement accuracy, this paper proposes, for the first time, a 3D compensation method for the systematic error of a Kinect V2 in a 3D calibration field. The method proceeds as follows. First, the coordinate systems of the RGB-D camera and the 3D calibration field are aligned using 3D corresponding points. Second, inliers are selected using the Bayes SAmple Consensus (BaySAC) algorithm to eliminate gross errors (i.e., outliers). Third, the parameters of the 3D registration model are estimated by an iteration method with variable weights that further controls the error. Fourth, three systematic-error compensation models are established and solved by stepwise regression. Finally, the optimal model is selected to calibrate the RGB-D camera. The experimental results show the following: (1) the BaySAC algorithm effectively eliminates gross errors; (2) the iteration method with variable weights better controls slightly larger accidental errors; and (3) the 3D compensation method compensates for 91.19% and 61.58% of the systematic error of the RGB-D camera in the depth and 3D directions, respectively, in the 3D control field, outperforming the 2D compensation method. The proposed method controls three types of observational error (gross, accidental and systematic) as well as model errors, and effectively improves the accuracy of depth data.
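BaySAC refines RANSAC's hypothesis sampling with prior inlier probabilities; the inlier-selection step in the abstract can be illustrated with a plain RANSAC sketch over 3D corresponding points. This is a simplification of the paper's BaySAC (uniform sampling instead of Bayesian updating), using a least-squares rigid fit via the Kabsch algorithm; function names and the residual threshold are illustrative:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch): dst ~= R @ src + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def ransac_inliers(src, dst, thresh=0.02, iters=200, seed=0):
    """Boolean mask of correspondences whose residual under the best
    minimal-sample rigid fit is below thresh (gross errors excluded)."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)  # minimal sample
        R, t = fit_rigid(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        mask = resid < thresh
        if mask.sum() > best.sum():
            best = mask
    return best
```

The surviving inliers would then feed the variable-weight iterative adjustment described in the abstract.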


2021 ◽  
Vol 7 (5) ◽  
pp. 4133-4143
Author(s):  
Hongyu Yu ◽  
Yuefeng Li ◽  
Xingjian Jiang

Motion sensors are widely used in human-computer interaction. This article uses Microsoft's Kinect V2 sensor to acquire depth information for gesture capture and recognition in electronic-information teaching demonstrations. In this study, depth information is used to dynamically capture demonstrative teaching gestures. In addition, a combined denoising method is proposed that removes interference and noise more effectively than a single denoising method; compared with traditional denoising methods, it better removes blur and boundary burrs. Dynamic recognition of the demonstrative gestures was then implemented in software. This work can be applied to human-computer interaction in electronic-information teaching to further improve recognition accuracy.
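The abstract does not give the paper's specific filter chain, but a generic sketch of "combined" denoising of Kinect depth frames (a temporal filter to fill dropouts and suppress speckle across frames, followed by a spatial median to smooth boundary burrs) might look like this, assuming a stack of depth frames in which 0 marks an invalid pixel:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def temporal_median(frames):
    """Per-pixel median over a stack of depth frames; zeros (invalid
    pixels) are ignored, so single-frame dropouts are filled in."""
    stack = np.where(frames == 0, np.nan, frames.astype(float))
    out = np.nanmedian(stack, axis=0)   # warns if a pixel is invalid in all frames
    return np.nan_to_num(out)           # such pixels stay 0

def spatial_median3x3(depth):
    """3x3 spatial median filter; the one-pixel border is left unfiltered."""
    win = sliding_window_view(depth, (3, 3))
    out = depth.astype(float).copy()
    out[1:-1, 1:-1] = np.median(win, axis=(2, 3))
    return out
```

Applying `temporal_median` first and `spatial_median3x3` second removes both flicker across frames and isolated speckle within a frame.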


Agronomy ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1780
Author(s):  
Chiranjivi Neupane ◽  
Anand Koirala ◽  
Zhenglin Wang ◽  
Kerry Brian Walsh

Eight depth cameras varying in operational principle (stereoscopy: ZED, ZED2, OAK-D; active IR stereoscopy: RealSense D435; time of flight (ToF): RealSense L515, Kinect v2, Blaze 101, Azure Kinect) were compared in the context of in-orchard fruit localization and sizing. For this application, a specification of a bias-corrected root mean square error (RMSE) of 20 mm at a camera-to-fruit distance of 2 m, under sunlit field conditions, was set. The ToF cameras achieved this specification, with the Blaze 101 or Azure Kinect recommended for operation in sunlight under orchard conditions. At a camera-to-fruit distance of 1.5 m in sunlight, the Azure Kinect achieved an RMSE of 6 mm, a bias of 17 mm, an SD of 2 mm and a fill rate of 100% for depth values over a central 50 × 50 pixel region. To enable inter-study comparisons, it is recommended that future assessments of depth cameras for this application include estimation of bias-corrected RMSE and of bias on estimated camera-to-fruit distances at 50 cm intervals out to 3 m, under both artificial light and sunlight, with characterization of image distortion and estimation of fill rate.
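Bias-corrected RMSE removes the mean offset from the error before computing the RMSE, which is why it can be far smaller than the bias itself (as in the Azure Kinect figures above). A minimal sketch of how the reported statistics relate, with invented depth readings in millimetres:

```python
import numpy as np

def depth_error_stats(measured, truth):
    """Error statistics for depth readings against ground truth (same units)."""
    err = np.asarray(measured, float) - np.asarray(truth, float)
    bias = err.mean()                               # mean offset
    sd = err.std(ddof=1)                            # spread about the bias
    rmse = np.sqrt(np.mean(err ** 2))               # penalizes bias and spread
    rmse_bc = np.sqrt(np.mean((err - bias) ** 2))   # bias-corrected RMSE
    return bias, sd, rmse, rmse_bc
```

A camera with a nearly constant offset has a large bias but a small bias-corrected RMSE, and the offset can be removed by calibration.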


Author(s):  
Aditi Bhateja ◽  
Adarsh Shrivastav ◽  
Himanshu Chaudhary ◽  
Brejesh Lall ◽  
Prem K Kalra
Keyword(s):  

2021 ◽  
Vol 22 (3) ◽  
pp. 1-10
Author(s):  
Manuel Alejandro Ojeda Misses ◽  
Haydée Silva Ochoa ◽  
Alberto Soria López
Keyword(s):  

Ludibot, a mobile robot based on the Kinect v2 device, seeks to exploit gestural human-machine interaction for foreign-language learning. This deliberately multidisciplinary project combines robotics with both game studies and the didactics of languages and cultures. This article details Ludibot's components and discusses the main aspects of controlling the interactive human-mobile-robot interface. It then presents the structure of that interface and describes the first game developed, aimed at playful learning of French vocabulary for the parts of the body.


2021 ◽  
Vol 11 (13) ◽  
pp. 6073
Author(s):  
Ines Ayed ◽  
Antoni Jaume-i-Capó ◽  
Pau Martínez-Bueso ◽  
Arnau Mir ◽  
Gabriel Moyà-Alcover

To prevent falls, it is important to periodically measure an individual's balance ability using reliable clinical tests. As Red Green Blue Depth (RGBD) devices are increasingly used for balance rehabilitation at home, they may also be used to objectively assess balance ability and determine the effectiveness of a therapy. To this end, we developed a system based on the Microsoft Kinect v2 for measuring the Functional Reach Test (FRT), one of the clinical balance tools most widely used to predict falls. Two experiments were conducted to compare the FRT measures computed by our Kinect v2 system with those obtained by the standard method, i.e., manually. In terms of validity, we found a very strong correlation between the two methods (r = 0.97 and r = 0.99 (p < 0.05) for experiments 1 and 2, respectively). However, the Kinect measurements needed to be corrected with a linear model to fit the manual data; a linear regression model was therefore applied, and examination of the regression assumptions showed that the model fits the data well. Applying the paired t-test to the corrected data indicated no statistically significant difference between the measurements obtained by the two methods. As for the reliability of the test, we obtained good to excellent repeatability of the FRT measurements tracked by the Kinect (ICC = 0.86 and ICC = 0.99 for experiments 1 and 2, respectively). These results suggest that the Microsoft Kinect v2 is reliable and adequate for computing the standard FRT.
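The correction step described above, mapping raw Kinect reach values onto the manual measurements with a fitted line, can be sketched with hypothetical paired readings (the numbers below are invented for illustration; the fit uses ordinary least squares, as in a simple linear regression):

```python
import numpy as np

def linear_correction(kinect, manual):
    """Fit manual ~= a*kinect + b by least squares; return the correction map."""
    a, b = np.polyfit(kinect, manual, 1)
    return lambda x: a * np.asarray(x, float) + b

# Hypothetical paired FRT readings in cm: the Kinect under-reads
# with a scale error plus small measurement noise.
manual = np.array([22.0, 25.5, 28.0, 30.5, 33.0, 35.5])
kinect = 0.9 * manual - 1.0 + np.array([0.2, -0.1, 0.1, -0.2, 0.0, 0.1])

r = np.corrcoef(kinect, manual)[0, 1]   # validity: strength of linear relation
correct = linear_correction(kinect, manual)
rmse_before = np.sqrt(np.mean((kinect - manual) ** 2))
rmse_after = np.sqrt(np.mean((correct(kinect) - manual) ** 2))
```

A high correlation with a systematic offset, as here, is exactly the case where a linear correction brings the device-based measure into agreement with the manual one.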

