MS Kinect
Recently Published Documents


TOTAL DOCUMENTS: 36 (FIVE YEARS: 8)

H-INDEX: 8 (FIVE YEARS: 1)

2022, Vol. 12
Author(s): Aditya Viswakumar, Venkateswaran Rajagopalan, Tathagata Ray, Pranitha Gottipati, Chandu Parimi

Gait analysis is used in many fields, such as medical diagnostics, osteopathic medicine, and comparative and sports-related biomechanics. The most commonly used system for capturing gait is an advanced video-camera-based passive marker system such as VICON. However, such systems are expensive, and reflective markers on subjects can be intrusive and time-consuming to apply. Moreover, setting up markers on certain rehabilitation patients, such as people with stroke or spinal cord injuries, can be difficult. Recently, some markerless systems were introduced to overcome the challenges of marker-based systems. However, current markerless systems have low accuracy and pose other challenges in gait analysis, such as with people in long clothing that hides the gait kinematics. The present work attempts to build an affordable, easy-to-use, accurate gait analysis system while addressing all the mentioned issues. The system in this study uses images from a video taken with a smartphone camera (800 × 600 pixels at an average rate of 30 frames per second). It uses OpenPose, a 2D real-time multi-person keypoint detection technique that learns to associate body parts with individuals in the image using Convolutional Neural Networks (CNNs). This bottom-up approach achieves high accuracy and real-time performance, regardless of the number of people in the image. The proposed system is called the “OpenPose based Markerless Gait Analysis System” (OMGait). Ankle, knee, and hip flexion/extension angles were measured using OMGait in 16 healthy volunteers under different lighting and clothing conditions. The measured kinematic values were compared with a standard video-camera-based normative dataset and with data from a markerless MS Kinect system. The mean absolute error of the joint angles from the proposed system was less than 9° for different lighting conditions and less than 11° for different clothing conditions compared to the normative dataset. The proposed system is adequate for measuring the kinematics of the ankle, knee, and hip. It also outperforms markerless systems like MS Kinect, which fail to measure the kinematics of the ankle, knee, and hip joints under dark and bright lighting conditions and in subjects wearing long robes.
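The abstract does not include the angle computation itself; below is a minimal Python sketch of how a flexion/extension angle can be derived from three 2D keypoints such as OpenPose's hip, knee, and ankle output. The pixel coordinates in the example are hypothetical, not from the study.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle (degrees) at point b formed by segments b->a and b->c.

    a, b, c are (x, y) keypoints, e.g. hip, knee, ankle from OpenPose.
    """
    ba = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bc = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)
    cos_angle = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example with hypothetical keypoint coordinates (pixels):
hip, knee, ankle = (320, 240), (330, 330), (325, 420)
included = joint_angle(hip, knee, ankle)   # included angle at the knee
knee_flexion = 180.0 - included            # 0 deg = fully extended leg
print(f"knee flexion: {knee_flexion:.1f} deg")
```

The same three-point construction applies at the ankle and hip; a full pipeline would also smooth keypoint jitter across frames before computing angles.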


Author(s): Angelo Christian F. Austria, Ma. Lorena M. Madolid, Sarah May D. Mejia, Ricci Angela Valenzuela, Engr. Roselito E. Tolentino

Author(s): Majeed Soufian, Samia Nefti-Mezian, Jonathan Drake

The majority of older people wish to live independently at home for as long as possible, despite having a range of age-related conditions including cognitive impairment. To facilitate this, there has been an extensive focus on exploring the capability of new technologies, with limited success. This paper investigates whether MS Kinect (a motion-based 3-D scanning sensor), used within the MiiHome (My Intelligent Home) project in conjunction with other sensory data, machine learning, and big data techniques, can assist in the diagnosis and prognosis of cognitive impairment and hence prolong independent living. A pool of Kinect devices and various sensors, powered by minicomputers providing internet connectivity, is being installed in up to 200 homes, enabling continuous remote monitoring of elderly residents living alone. Passive, off-the-shelf sensor technologies were chosen for data acquisition, specifically from sources that are part of the fabric of the homes, so that no extra effort is required from the participants. Various constraints, including environmental, geometrical, and big-data constraints, were identified and appropriately addressed. A visualization tool (MAGID) was developed for validation and verification of numerous behavioural activities. A subset of data from twelve pensioners aged over 65 with age-related cognitive decline and frailty was then collected over a period of 6 months. These data were subjected to several machine learning algorithms (multilayer perceptron neural network, neuro-fuzzy, and deep learning) for classification and for extracting routine behavioural patterns. These patterns were then analysed further to ascertain any health-related information and its attributes. For the first time, important routine behaviour related to Activities of Daily Living (ADL) of elderly people with cognitive and physical decline has been learnt by machine learning techniques from sample data obtained by MS Kinect. Medically important behaviours, e.g. eating, walking, and sitting, were best learnt by deep learning, with an accuracy of 99.30% during the training stage and an average error rate of 1.83% (maximum 12.98%) during the implementation phase. Observations obtained by applying the learnt behaviours are presented as trends over time. These trends, supplemented by other sensory signals, have provided a clearer picture of the physical (in)activities (including falls) of the pensioners. The calculated behavioural attributes related to key indicators of health events can be used to model the trajectory of health status related to cognitive decline in a home setting. These results, based on a small number of elderly residents over a short period, imply that indicators of cognitive decline can be found within the data obtained from the MiiHome project. However, further studies are needed for full clinical validation of these indications, in conjunction with assessment of the participants' cognitive decline.
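The network architectures are not detailed in the abstract; the following is a minimal sketch of the multilayer-perceptron classification stage using scikit-learn. The feature layout (flattened skeleton frames) and class labels are assumptions, and random stand-in data replaces the project's actual Kinect recordings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: each row is a flattened skeleton frame
# (25 Kinect joints x 3 coordinates = 75 features); labels are
# activity classes such as 0=eating, 1=walking, 2=sitting.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 75))
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Small two-hidden-layer MLP; the study's actual topology is not published.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

With real, structured skeleton features (rather than random noise) this kind of classifier is what produces the accuracy figures the paper reports.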


2020, Vol. 4 (3), pp. 61
Author(s): Panagiotis Vogiatzidakis, Panayiotis Koutsabasis

Touchless, mid-air gesture-based interactions with remote devices have been investigated as alternatives or complements to interactions based on remote controls and smartphones. Related studies focus on user elicitation of a gesture vocabulary for one or a few home devices and explore recommendations for the respective gesture vocabularies without validating them through empirical testing with interactive prototypes. We have developed an interactive prototype of seven home devices based on spatial Augmented Reality (AR). Each device responds, via the MS Kinect sensor, to touchless gestures identified in a previous elicitation study. Nineteen users participated in a two-phase test (with and without help from a virtual assistant) following a scenario that required each user to apply 41 gestural commands (19 unique). We report on the main usability indicators: task success, task time, errors (false negatives/positives), memorability, perceived usability, and user experience. The main conclusion is that mid-air interaction with multiple home devices is feasible, fairly easy to learn and apply, and enjoyable. The contributions of this paper are (a) validation of a previously elicited gesture set; (b) development of a spatial AR prototype for testing mid-air gestures; and (c) extensive assessment of gestures and evidence in favor of mid-air interaction in smart environments.
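The elicited gesture set itself is not reproduced in the abstract; as an illustration of how one simple mid-air gesture (a horizontal hand swipe) can be recognized from streamed skeleton-joint coordinates, the Python sketch below uses hypothetical displacement and timing thresholds rather than the study's actual recognizer.

```python
from collections import deque

class SwipeDetector:
    """Toy horizontal-swipe detector over a stream of hand positions.

    Positions would come from a skeleton tracker such as the Kinect SDK;
    the 0.40 m displacement and 0.5 s window thresholds are illustrative.
    """
    def __init__(self, min_dx=0.40, window_s=0.5):
        self.min_dx = min_dx
        self.window_s = window_s
        self.samples = deque()          # (timestamp, hand_x) pairs

    def update(self, t, hand_x):
        self.samples.append((t, hand_x))
        # Drop samples older than the detection window.
        while self.samples and t - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        dx = hand_x - self.samples[0][1]
        if dx > self.min_dx:
            self.samples.clear()
            return "swipe_right"
        if dx < -self.min_dx:
            self.samples.clear()
            return "swipe_left"
        return None

# Simulated 30 FPS stream: hand moving right at 1 m/s.
det = SwipeDetector()
for i in range(20):
    event = det.update(i * 0.033, i * 0.033 * 1.0)
    if event:
        print(event)   # fires once the hand has travelled 0.40 m
        break
```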


Sensors, 2020, Vol. 20 (5), pp. 1360
Author(s): Martin Schätz, Aleš Procházka, Jiří Kuchyňka, Oldřich Vyšata

This paper pursues two goals: to show that various depth sensors can record breathing rate with the same accuracy as the contact sensors used in polysomnography (PSG), and to show that breathing signals from depth sensors are as sensitive to breathing changes as PSG records. The breathing signal from depth sensors can be used for classification of sleep apnea events with the same success rate as with PSG data. The recent development of computational technologies has led to a big leap in the usability of range imaging sensors. New depth sensors are smaller and offer higher sampling rates, better resolution, and greater precision. They are widely used for computer vision in robotics, but they can also serve as non-contact, non-invasive systems for monitoring breathing and its features. The breathing rate can be easily represented as the frequency of a recorded signal. All tested depth sensors (MS Kinect v2, RealSense SR300, R200, D415, and D435) are capable of recording depth data with enough depth precision and temporal sampling frequency (20–35 frames per second (FPS)) to capture the breathing rate. The spectral analysis shows a breathing rate between 0.2 Hz and 0.33 Hz, which corresponds to the breathing rate of an adult during sleep. To test the quality of the breathing signal processed by the proposed workflow, a neural network classifier (a simple competitive NN) was trained on a set of 57 whole-night polysomnographic records with sleep apneas classified by a sleep specialist. The resulting classifier can mark all apnea events with 100% accuracy compared to the sleep specialist's classification, which is useful for estimating the number of events per hour. When compared to the specialist's classification of polysomnographic breathing signal segments, which is used for calculating event length, the classifier has an F1 score of 92.2% and an accuracy of 96.8% (sensitivity 89.1%, specificity 98.8%). The classifier also proves successful when tested on breathing signals from MS Kinect v2 and RealSense R200 with simulated sleep apnea events. The whole process can be made fully automatic once automatic chest-area segmentation of the depth data is implemented.
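The spectral step the abstract describes, finding the dominant frequency of the chest-depth signal in the 0.2–0.33 Hz band, can be sketched in a few lines of Python. The synthetic signal below stands in for real depth-sensor data, and the band limits are taken from the abstract.

```python
import numpy as np

def breathing_rate(depth_signal, fps):
    """Estimate breathing rate (Hz) from a mean chest-area depth signal.

    Finds the dominant spectral peak in the 0.2-0.33 Hz band the paper
    reports for adults during sleep. Input values here are synthetic.
    """
    x = np.asarray(depth_signal, dtype=float)
    x = x - x.mean()                         # remove DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    band = (freqs >= 0.2) & (freqs <= 0.33)
    return freqs[band][np.argmax(spectrum[band])]

# Synthetic 60 s recording at 30 FPS: 0.25 Hz breathing plus sensor noise.
fps = 30
t = np.arange(0, 60, 1 / fps)
signal = 5.0 * np.sin(2 * np.pi * 0.25 * t) + np.random.normal(0, 0.5, t.size)
print(f"estimated rate: {breathing_rate(signal, fps):.3f} Hz")  # ~0.25
```

At 30 FPS and 60 s of data the frequency resolution is 1/60 Hz, comfortably fine enough to resolve peaks within the 0.2–0.33 Hz band.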


2019, Vol. 12 (9), pp. 54
Author(s): Raúl Lozada-Yánez, Nora La-Serna-Palomino, Fernando Molina-Granja

By its nature, the learning of certain complex content has always been a focus of attention and a challenge in the study of mathematics. This is especially true for children, because the psycho-cognitive skills of these users, particularly in the first levels of Basic General Education, are not yet mature. As a result, children cannot easily and correctly assimilate certain abstract content during the early stages of mathematics learning. This study presents the results of applying a computer system called the “Kinect based Augmented Reality Math Learning System” (KARMLS), whose design and development use Augmented Reality technology and the motion sensor of the MS Kinect camera. The application covers elementary math topics from the Basic General Education curriculum of the Republic of Ecuador. The study used an experimental quantitative approach involving 29 third-grade children (13 girls and 16 boys) attending two Basic General Education schools in Riobamba, Ecuador. The prototype was evaluated by means of a pretest and a posttest, which were compared using Student's t-test for paired samples. From the analysis of the data and the discussion, it is concluded that the system had a positive effect on learning when used as a supplementary classroom tool, and that it was more effective for children who previously had low performance than for high performers. The children were also motivated and showed positive attitudes toward the software.
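For readers unfamiliar with the statistical step, the pretest/posttest contrast described above is a paired-samples t-test; a minimal sketch with SciPy follows. The scores are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores for the same pupils (paired samples).
pretest  = np.array([12, 14,  9, 11, 15, 10, 13,  8, 12, 11])
posttest = np.array([15, 16, 13, 14, 17, 12, 16, 11, 14, 13])

# Paired test: each child serves as their own control.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) supports a genuine learning gain.
```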


2018, Vol. 9 (2), pp. 1
Author(s): Armando Martinez, Antonio Iyda Paganelli, Alberto Raposo

Immersive virtual reality (VR) has been used in different fields such as training, educational programs, entertainment, psychological treatment, and rehabilitation. Despite its broad utilization, some issues limit its application, such as loss of balance. Balance is disturbed because the visual stimuli received from the virtual scenario are not in harmony with the stimuli perceived by the proprioceptive and vestibular systems, which remain in contact with the real environment. With the increasing popularity and accessibility of high-quality VR systems, concerns have been raised about the propensity of VR to induce balance loss. Balance is essential for a safe VR experience, and its loss can result in severe injury. In this work, we present a methodology and the necessary tools to quantify the influence of VR on the user's balance and to assess the risk of falls during VR interaction. Through an experiment using an Oculus Rift and an MS Kinect sensor, we observe, quantify, and compare the effect of VR scenes with different levels of danger on users' balance, as well as the effect of visual and auditory warnings of balance loss. Results suggest that auditory signals were not effective in warning users about fall risk, and that the order in which scenes are presented affects user behavior. Users who were first presented with a more challenging scene proceeded more carefully and mostly carried this behavior over to the less challenging scenes.
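The abstract does not specify which balance metrics were computed; one common way to quantify balance from Kinect skeleton data is postural sway of a center-of-mass proxy such as the SpineBase joint. The sketch below illustrates this under that assumption, with synthetic trajectory data.

```python
import numpy as np

def sway_metrics(com_xz):
    """Simple balance metrics from a center-of-mass trajectory.

    com_xz: (N, 2) array of horizontal (x, z) positions per frame, e.g.
    the Kinect SpineBase joint as a center-of-mass proxy. A real study
    would add filtering and calibration; values below are synthetic.
    """
    com = np.asarray(com_xz, dtype=float)
    centered = com - com.mean(axis=0)
    rms_sway = np.sqrt((centered ** 2).sum(axis=1).mean())   # RMS distance from mean
    path_len = np.linalg.norm(np.diff(com, axis=0), axis=1).sum()  # total travel
    return rms_sway, path_len

# Synthetic 10 s trajectory at 30 FPS with a few cm of random sway.
traj = np.cumsum(np.random.normal(0, 0.002, size=(300, 2)), axis=0)
rms, path = sway_metrics(traj)
print(f"RMS sway: {rms:.3f} m, sway path: {path:.3f} m")
```

Larger RMS sway or path length under a given VR scene would indicate greater balance disturbance, which is the kind of scene-by-scene comparison the study reports.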

