Validation of feasibility of two depth sensor-based Microsoft Kinect cameras for human abduction-adduction motion analysis

2016 ◽  
Vol 17 (9) ◽  
pp. 1209-1214 ◽  
Author(s):  
Choong Yeon Kim ◽  
Jae Soo Hong ◽  
Keyoung Jin Chun


Author(s):
D. Pagliari ◽  
F. Menna ◽  
R. Roncella ◽  
F. Remondino ◽  
L. Pinto

3D scene modelling, gesture recognition and motion tracking are fields in rapid and continuous development, driven by the growing demand for interactivity in the video-game and e-entertainment market. The Microsoft Kinect device was created with the idea of letting users play without having to hold any remote controller. The Kinect has always attracted researchers in different fields, from robotics to Computer Vision (CV) and biomedical engineering, as well as third-party communities that have released several Software Development Kit (SDK) versions for the Kinect in order to use it not only as a game device but also as a measurement system. The Microsoft Kinect Fusion control libraries (first released in March 2013) allow the device to be used as a 3D scanner, producing meshed polygonal models of a static scene simply by moving the Kinect around. A drawback of this sensor is the geometric quality of the delivered data and its low repeatability. For this reason, the authors carried out an investigation to evaluate the accuracy and repeatability of the depth measurements delivered by the Kinect. The paper presents a thorough calibration analysis of the Kinect imaging sensor, with the aim of establishing the accuracy and precision of the delivered information: a straightforward calibration of the depth sensor is presented, and the 3D data are then corrected accordingly. By integrating the depth correction algorithm and correcting the interior and exterior orientation parameters of the IR camera, the Fusion libraries are corrected and a new reconstruction software is created to produce more accurate models.
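As an illustration of the kind of per-pixel depth correction described above, here is a minimal Python sketch assuming a simple distance-dependent polynomial error model; the coefficients and the correction form are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical error model d_true = d_raw - (A*d_raw^2 + B*d_raw + C),
# with coefficients that would come from a plane-based depth calibration.
A, B, C = 2.3e-5, -1.2e-2, 4.0  # example values, not from the paper

def correct_depth(raw_depth_mm: np.ndarray) -> np.ndarray:
    """Apply a distance-dependent correction to a Kinect depth frame (mm)."""
    error = A * raw_depth_mm**2 + B * raw_depth_mm + C
    corrected = raw_depth_mm - error
    corrected[raw_depth_mm == 0] = 0  # keep invalid (zero) pixels invalid
    return corrected

# Each corrected frame would then replace the raw frame before
# Fusion-style volumetric integration.
```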


2018 ◽  
Vol 218 ◽  
pp. 02014
Author(s):  
Arief Ramadhani ◽  
Achmad Rizal ◽  
Erwin Susanto

Computer vision is a field of research that can be applied to a variety of subjects. One application of computer vision is the hand gesture recognition system. Hand gestures are one way to interact with computers or machines. In this study, hand gesture recognition was used as a password for an electronic key system. The hand gesture recognition in this study utilized the depth sensor of the Microsoft Kinect Xbox 360. The depth sensor captured the hand image, which was then segmented using a threshold. By scanning each pixel, we detected the thumb and the number of other fingers that were open. The hand gesture recognition result was used as a password to unlock the electronic key. This system could recognize nine types of hand gestures, representing the numbers 1 through 9. The average accuracy of the hand gesture recognition system was 97.78% for a single hand sign and 86.5% for a password of three hand signs.
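The abstract describes per-pixel scanning to count open fingers; the sketch below illustrates the same segment-then-count idea using a standard depth-window threshold and OpenCV convexity defects, which is an assumption rather than the authors' exact method (the depth band and defect threshold are made-up values):

```python
import cv2
import numpy as np

def count_extended_fingers(depth_mm: np.ndarray, near_mm=500, far_mm=700) -> int:
    """Segment the hand with a depth window, then count extended fingers
    from convexity defects of the hand contour."""
    # Keep only pixels inside the assumed hand depth band (values in mm).
    mask = ((depth_mm > near_mm) & (depth_mm < far_mm)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)
    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0
    # Deep defects correspond to the valleys between extended fingers.
    valleys = sum(1 for _, _, _, d in defects[:, 0] if d / 256.0 > 20)
    return valleys + 1 if valleys else 0
```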


Sensors ◽  
2020 ◽  
Vol 20 (5) ◽  
pp. 1291
Author(s):  
Chin-Hsuan Liu ◽  
Posen Lee ◽  
Yen-Lin Chen ◽  
Chen-Wen Yen ◽  
Chao-Wei Yu

A stable posture requires the coordination of multiple joints of the body, and how this coordination is achieved is an active subject of research. The number of degrees of freedom (DOFs) of the human motor system is considerably larger than the DOFs required for postural balance, and the manner in which the central nervous system manages this redundancy remains unclear. To understand this phenomenon, in this study, three local inter-joint coordination pattern (IJCP) features were introduced to characterize the strength, changing velocity, and complexity of the inter-joint couplings by computing the correlation coefficients between joint velocity signal pairs. In addition, to quantify the complexity of IJCPs from a global perspective, another set of IJCP features was introduced by performing principal component analysis on all joint velocity signals. A Microsoft Kinect depth sensor was used to acquire the motion of 15 joints of the body. The efficacy of the proposed features was tested using the captured motions of two age groups (18–24 and 65–73 years) standing still. With regard to the redundant DOFs of the joints of the body, the experimental results suggested that the body uses an inter-joint coordination strategy intermediate between the two extreme coordination modes of total joint dependence and total joint independence. In addition, comparative statistical results for the proposed features showed that aging increases the coupling strength, decreases the changing velocity, and reduces the complexity of the IJCPs. These results also suggest that with aging, the balance strategy becomes more joint dependent. Given the simplicity of the proposed features and the affordability of the easy-to-use Kinect depth sensor, such an assembly can be used to collect large amounts of data to explore the potential of the proposed features in assessing the performance of the human balance control system.
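A minimal sketch of how such correlation- and PCA-based IJCP features might be computed from Kinect joint velocity signals; the exact windowing and feature definitions in the paper likely differ:

```python
import numpy as np

def ijcp_features(joint_velocities: np.ndarray):
    """joint_velocities: (n_frames, n_joints) array of joint speed signals.
    Returns a mean pairwise coupling strength and a PCA-based complexity
    measure (one minus the variance fraction of the first component)."""
    n = joint_velocities.shape[1]
    corr = np.corrcoef(joint_velocities.T)       # n_joints x n_joints
    iu = np.triu_indices(n, k=1)
    coupling_strength = np.abs(corr[iu]).mean()  # local IJCP-style feature

    centered = joint_velocities - joint_velocities.mean(axis=0)
    _, s, _ = np.linalg.svd(centered, full_matrices=False)
    var = s**2 / (s**2).sum()
    complexity = 1.0 - var[0]  # near 0 => motion dominated by a single mode
    return coupling_strength, complexity
```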


2017 ◽  
Vol 9 (6) ◽  
pp. 537-544 ◽  
Author(s):  
Aaron D. Gray ◽  
Brad W. Willis ◽  
Marjorie Skubic ◽  
Zhiyu Huo ◽  
Swithin Razu ◽  
...  

Background: Noncontact anterior cruciate ligament (ACL) injury in adolescent female athletes is an increasing problem. The knee-ankle separation ratio (KASR), calculated at initial contact (IC) and peak flexion (PF) during the drop vertical jump (DVJ), is a measure of dynamic knee valgus. The Microsoft Kinect V2 has shown promise as a reliable and valid marker-less motion capture device. Hypothesis: The Kinect V2 will demonstrate good to excellent correlation between KASR results at IC and PF during the DVJ, as compared with a “gold standard” Vicon motion analysis system. Study Design: Descriptive laboratory study. Level of Evidence: Level 2. Methods: Thirty-eight healthy volunteer subjects (20 male, 18 female) performed 5 DVJ trials, simultaneously measured by a Vicon MX-T40S system, 2 AMTI force platforms, and a Kinect V2 with customized software. A total of 190 jumps were completed. The KASR was calculated at IC and PF during the DVJ. The intraclass correlation coefficient (ICC) assessed the degree of KASR agreement between the Kinect and Vicon systems. Results: The ICCs of the Kinect V2 and Vicon KASR at IC and PF were 0.84 and 0.95, respectively, showing excellent agreement between the 2 measures. The Kinect V2 successfully identified the KASR at PF and IC frames in 182 of 190 trials, demonstrating 95.8% reliability. Conclusion: The Kinect V2 demonstrated excellent ICC of the KASR at IC and PF during the DVJ when compared with the Vicon system. A customized Kinect V2 software program demonstrated good reliability in identifying the KASR at IC and PF during the DVJ. Clinical Relevance: Reliable, valid, inexpensive, and efficient screening tools may improve the accessibility of motion analysis assessment of adolescent female athletes.
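A small sketch of the KASR computation from 3D joint positions; the frame-selection heuristics noted in the comment are assumptions, not the authors' detection algorithm:

```python
import numpy as np

def kasr(knee_l, knee_r, ankle_l, ankle_r) -> float:
    """Knee-ankle separation ratio from 3D joint positions
    (any consistent units)."""
    knee_sep = np.linalg.norm(np.asarray(knee_l) - np.asarray(knee_r))
    ankle_sep = np.linalg.norm(np.asarray(ankle_l) - np.asarray(ankle_r))
    return knee_sep / ankle_sep

# Given per-frame joint tracks, IC could be taken as the first frame of
# ground contact (e.g., minimum foot height after the drop) and PF as the
# frame of maximum knee flexion; KASR is then evaluated at those frames.
```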


2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Tao Hu ◽  
Xinyan Zhu ◽  
Wei Guo ◽  
Kehua Su

This paper proposes a novel approach that decomposes a two-person interaction into a Positive Action and a Negative Action for more efficient behavior recognition. A Positive Action plays the decisive role in a two-person exchange, so interaction recognition can be simplified to Positive Action-based recognition, focusing on an action representation of just one person. Recently, a new depth sensor, the Microsoft Kinect camera, has become widely available, providing RGB-D data with 3D spatial information for quantitative analysis. However, there are few publicly accessible test datasets using this camera for assessing two-person interaction recognition approaches. Therefore, we created a new dataset, named K3HI, with six types of complex human interactions: kicking, pointing, punching, pushing, exchanging an object, and shaking hands. Three types of features were extracted for each Positive Action: joint, plane, and velocity features. We used continuous Hidden Markov Models (HMMs) to evaluate the Positive Action-based interaction recognition method and the traditional two-person interaction recognition approach on our test dataset. Experimental results showed that the proposed recognition technique is more accurate than the traditional method and shortens the sample training time, making it superior overall.
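As a sketch of likelihood-based classification with continuous HMMs, the snippet below fits one Gaussian-emission HMM per interaction class and labels a sequence by the best-scoring model; `hmmlearn` here is a stand-in for whatever implementation the authors used, and the stacking of joint, plane, and velocity features into per-frame vectors is assumed:

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM  # third-party package: hmmlearn

def train_models(sequences_by_label, n_states=5):
    """Fit one continuous (Gaussian-emission) HMM per interaction class.
    sequences_by_label: {label: [(n_frames, n_features) arrays]}"""
    models = {}
    for label, seqs in sequences_by_label.items():
        X = np.vstack(seqs)                 # concatenated training sequences
        lengths = [len(s) for s in seqs]    # per-sequence frame counts
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=30)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, seq):
    """Label a feature sequence by the HMM with the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(seq))
```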


Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Hand gestures may play an important role in medical applications for the health care of elderly people, where a natural interaction for different requests can be provided by making specific gestures. In this study we explored three different scenarios using a Microsoft Kinect V2 depth sensor and then evaluated the effectiveness of the outcomes. The first scenario utilized the default system embedded in the Kinect V2 sensor, whose depth metadata gives 11 parameters related to the tracked body, with five gestures for each hand. The second scenario used joint tracking provided by the Kinect depth metadata together with a depth threshold to enhance hand segmentation and efficiently recognize the number of fingers extended. The third scenario used a simple convolutional neural network together with joint tracking from the depth metadata to recognize five categories of gestures. In this study, deaf-mute elderly people executed five different hand gestures to indicate a specific request, such as needing water, a meal, the toilet, help or medicine. The requests were then sent to the care provider's smartphone, because the elderly people could not execute any activity independently. The system transferred these requests as messages through the global system for mobile communication (GSM) using a microcontroller.
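A minimal sketch of the request-forwarding step, assuming a serial-attached GSM modem driven by standard AT commands; the phone number, serial port, and gesture-to-request mapping are hypothetical:

```python
import serial  # third-party package: pyserial

# Hypothetical mapping from recognised gesture IDs to requests.
REQUESTS = {0: "water", 1: "meal", 2: "toilet", 3: "help", 4: "medicine"}

def send_request(gesture_id: int, phone="+1234567890", port="/dev/ttyUSB0"):
    """Send the recognised request as an SMS through a GSM modem
    using standard AT commands (AT+CMGF=1 selects text mode)."""
    text = f"Request from patient: {REQUESTS[gesture_id]}"
    with serial.Serial(port, 9600, timeout=5) as gsm:
        gsm.write(b"AT+CMGF=1\r")                   # switch modem to text mode
        gsm.read(64)
        gsm.write(f'AT+CMGS="{phone}"\r'.encode())  # set the recipient number
        gsm.read(64)
        gsm.write(text.encode() + b"\x1a")          # Ctrl-Z terminates the SMS
```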


2017 ◽  
Vol 13 (12) ◽  
pp. 162
Author(s):  
Robinson Jiménez Moreno ◽  
Oscar Aviles ◽  
Ruben Darío Hernández Beleño

This article presents a supervised position control system, based on image processing and oriented to cooperative work between two humanoid robots operating autonomously. The first robot picks up an object and carries it to the second robot, which then places it at an endpoint; this is achieved through straight-line movements and 180-degree turns. A Microsoft Kinect is used to find the exact spatial position of each robot and of the reference object, through color space conversion and filtering of the RGB camera image combined with the information transmitted by the depth sensor, thereby obtaining the final location of each. Algorithms developed in C# command each robot so that the two work together to transport the reference object from an initial point, handing it from one robot to the other and depositing it at the endpoint. The experiment was repeated over the same trajectory under uniform lighting conditions, achieving successful delivery of the object each time.
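The colour-plus-depth localisation could look roughly like the following sketch, which finds a colour marker in the RGB frame and reads its range from the depth map; the HSV bounds are hypothetical, and the pixel registration between the RGB and depth cameras is glossed over:

```python
import cv2
import numpy as np

def locate_marker(bgr, depth_mm, hsv_low=(100, 120, 80), hsv_high=(130, 255, 255)):
    """Find a colour marker in the RGB frame and read its depth to get a
    rough 3D position (pixel u, v plus range in mm)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low, np.uint8), np.array(hsv_high, np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None  # marker not visible in this frame
    u, v = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])  # mask centroid
    return u, v, float(depth_mm[v, u])
```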


2020 ◽  
Vol 2020 ◽  
pp. 1-10 ◽  
Author(s):  
Sasadara B. Adikari ◽  
Naleen C. Ganegoda ◽  
Ravinda G. N. Meegama ◽  
Indika L. Wanniarachchi

Busy lifestyles have led people to buy ready-made clothes from retail stores, with or without fitting them on, expecting a perfect match. Existing online clothes-shopping systems can provide only 2D images of the clothes, which does not guarantee a perfect match for the individual user. To overcome this problem, the apparel industry has conducted many studies on reducing the time gap between cloth selection and final purchase by introducing “virtual dressing rooms.” This paper discusses the design and implementation of an augmented reality “virtual dressing room” for real-time simulation of 3D clothes. The system uses a single Microsoft Kinect V2 sensor as the depth sensor to obtain user body parameter measurements, including 3D measurements such as the circumferences of the chest, waist, hip, thigh, and knee, to develop a unique model for each user. The size category of the clothes is chosen based on the measurements of each customer. The Unity3D game engine was incorporated to overlay 3D clothes virtually on the user in real time. The system is also equipped with gender identification and gesture controls for selecting clothes. The developed application successfully augmented the selected dress model with physics-based motion following the physical movements of the user, providing a realistic fitting experience. The performance evaluation reveals that a single depth sensor can be applied to real-time simulation of 3D clothes with less than 10% average measurement error.
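One plausible way to obtain a circumference from depth data, sketched below, is to slice the body point cloud at the measurement height and approximate the cross-section as an ellipse (using Ramanujan's perimeter formula); the abstract does not state the authors' actual method:

```python
import numpy as np

def circumference_from_slice(points_xyz: np.ndarray, height_m: float, band=0.01):
    """Estimate a body circumference (m) from a point-cloud slice at a given
    height, approximating the cross-section as an ellipse."""
    # Assumes y is the vertical axis; keep points within +/- band of the height.
    sl = points_xyz[np.abs(points_xyz[:, 1] - height_m) < band]
    if len(sl) < 10:
        return None  # not enough points for a stable estimate
    a = (sl[:, 0].max() - sl[:, 0].min()) / 2  # semi-axis, left-right
    b = (sl[:, 2].max() - sl[:, 2].min()) / 2  # semi-axis, front-back
    h = ((a - b) / (a + b)) ** 2
    # Ramanujan's approximation for the perimeter of an ellipse.
    return np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))
```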


2013 ◽  
Vol 284-287 ◽  
pp. 1996-2000 ◽  
Author(s):  
Hai Trieu Pham ◽  
Jung Ja Kim ◽  
Yong Gwan Won

Many motion analysis systems introduced in the past few years are receiving interest from researchers and developers due to their usefulness and wide application potential. However, many of these systems face difficulties in real applications because of high implementation cost and limited accuracy. This paper introduces a new 3D motion analysis system that can be implemented at lower cost with acceptable accuracy for various applications. The key component of our new system is the MSK (Microsoft Kinect) sensor, which is equipped with both a visual camera and an infrared camera. It can provide a color image, a 3D depth image and 3D skeleton data without any marker devices worn on the human body, while providing acceptable accuracy in 3D motion tracing at low cost. Our system can serve as a base framework for various 3D motion-based applications such as physical rehabilitation support, sports motion analysis and biomechanical applications.
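As an example of the kind of marker-less measurement such a framework enables, the sketch below computes a joint angle (e.g., elbow flexion) from three Kinect skeleton joints; this is a generic rehabilitation metric, not a method from the paper:

```python
import numpy as np

def joint_angle(a, b, c) -> float:
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist for an elbow-flexion measurement."""
    u = np.asarray(a, float) - np.asarray(b, float)
    v = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```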


Computers ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 5
Author(s):  
Munir Oudah ◽  
Ali Al-Naji ◽  
Javaan Chahl

Technological advances have allowed hand gestures to become an important research field, especially in applications such as health care and assistive applications for elderly people, providing a natural interaction with the assisting system through a camera by making specific gestures. In this study, we proposed three different scenarios using a Microsoft Kinect V2 depth sensor and then evaluated the effectiveness of the outcomes. The first scenario used joint tracking combined with a depth threshold to enhance hand segmentation and efficiently recognise the number of fingers extended. The second scenario utilised the metadata parameters provided by the Kinect V2 depth sensor, which provided 11 parameters related to the tracked body and gave information about three gestures for each hand. The third scenario used a simple convolutional neural network with joint tracking by depth metadata to recognise and classify five hand gesture categories. In this study, deaf-mute elderly people performed five different hand gestures, each related to a specific request, such as needing water, a meal, the toilet, help or medicine. Next, the request was sent via the global system for mobile communication (GSM) as a text message to the care provider's smartphone, because the elderly subjects could not execute any activity independently.
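The third scenario's classifier might resemble the following small convolutional network over depth crops of the hand; the architecture, input size, and use of PyTorch are assumptions for illustration, not the authors' network:

```python
import torch
import torch.nn as nn

class GestureCNN(nn.Module):
    """A small CNN over single-channel depth crops of the hand,
    classifying five request gestures (architecture is illustrative)."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)  # for 64x64 inputs

    def forward(self, x):  # x: (batch, 1, 64, 64) depth crops
        x = self.features(x)
        return self.classifier(x.flatten(1))

# logits = GestureCNN()(torch.randn(8, 1, 64, 64))  # -> shape (8, 5)
```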

