Gaze and Movement Assessment (GaMA): Inter-site validation of a visuomotor upper limb functional protocol

2019 ◽  
Author(s):  
Heather E. Williams ◽  
Craig S. Chapman ◽  
Patrick M. Pilarski ◽  
Albert H. Vette ◽  
Jacqueline S. Hebert

Abstract

Background: Successful hand-object interactions require precise hand-eye coordination with continual movement adjustments. Quantitative measurement of this visuomotor behaviour could provide valuable insight into upper limb impairments. The Gaze and Movement Assessment (GaMA) was developed to provide protocols for simultaneous motion capture and eye tracking during the administration of two functional tasks, along with data analysis methods to generate standard measures of visuomotor behaviour. The objective of this study was to investigate the reproducibility of the GaMA protocol across two independent groups of non-disabled participants, with different raters using different motion capture and eye tracking technology.

Methods: Twenty non-disabled adults performed the Pasta Box Task and the Cup Transfer Task. Upper body and eye movements were recorded using motion capture and eye tracking, respectively. Measures of hand movement, angular joint kinematics, and eye gaze were compared to those from a different sample of twenty non-disabled adults who had previously performed the same protocol with a different technology, rater, and site.

Results: Participants took longer to perform the tasks than those in the earlier study, although the relative time of each movement phase was similar. Measures that differed between the groups included hand distances travelled, hand trajectories, number of movement units, eye latencies, and peak angular velocities. Similarities included all hand velocity and grip aperture measures, eye fixations, and most peak joint angle and range of motion measures.

Discussion: This study confirmed the reproducibility of GaMA, despite a few differences introduced by learning effects, variation in task demonstration, and limitations of the kinematic model. The findings provide confidence in the reliability of normative results obtained by GaMA, indicating that it accurately quantifies the typical behaviours of a non-disabled population. This work supports the consideration of GaMA for use in populations with upper limb sensorimotor impairment.
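Several of the hand-movement measures named in the abstract (peak velocity, number of movement units) can be derived from a hand-position trace. The following is a minimal illustrative sketch, not the GaMA implementation: it assumes a fixed sampling interval and counts a "movement unit" at each local maximum of the speed profile above a threshold fraction of the peak, which is one common convention.

```python
import numpy as np

def movement_measures(positions, dt, unit_threshold=0.05):
    """Compute peak hand speed and a simple movement-unit count
    from a sequence of 3D hand positions sampled at interval dt."""
    positions = np.asarray(positions, dtype=float)
    # Speed profile: magnitude of the finite-difference velocity.
    speed = np.linalg.norm(np.diff(positions, axis=0), axis=1) / dt
    peak_velocity = float(speed.max())
    # Count a movement unit at each local speed maximum exceeding a
    # threshold fraction of the peak (illustrative rule, not GaMA's).
    thresh = unit_threshold * peak_velocity
    units = 0
    for i in range(1, len(speed) - 1):
        if speed[i] > speed[i - 1] and speed[i] >= speed[i + 1] and speed[i] > thresh:
            units += 1
    return peak_velocity, units
```

A smooth, single-peaked speed profile yields one movement unit; corrective sub-movements add local maxima and raise the count.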


Compensatory movement after stroke occurs when inter-joint coordination between the arm and forearm for arm transport becomes limited due to post-stroke weakness of the upper limb. This limitation makes hand movements inefficient when performing activities of daily living (ADLs). Previous work has shown the possibility of using Kinect to assess torso compensation during typical assessment of upper limb movement in a stroke-simulated setting, using a Torso Principal Component Analysis (PCA) Model. This research extends that study by evaluating the Torso PCA Model in terms of three-dimensional torso orientation angles during planar activities, namely circle tracing and point-to-point tracing. The orientation angles were compared with measurements from a standard motion capture system and with Kinect's intrinsic chest orientation angles. Based on the statistical results, the Torso PCA Model is concurrently valid with clinically accepted measures of torso orientation and can be used to further analyze torso compensation in stroke patients.
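The torso-orientation idea above can be sketched in a few lines of linear algebra: PCA (here via SVD) extracts the dominant axis of a torso point cloud, and the angle between that axis and vertical gives a simple lean measure. This is an illustrative sketch of a PCA-based torso axis, not the authors' Torso PCA Model; the marker layout and the +z-up convention are assumptions.

```python
import numpy as np

def torso_principal_axis(markers):
    """Return the first principal axis of a 3D torso point cloud.
    The dominant axis of variation approximates the torso's long
    (longitudinal) axis."""
    X = np.asarray(markers, dtype=float)
    X = X - X.mean(axis=0)                    # center the cloud
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    axis = vt[0]                              # first principal component
    if axis[2] < 0:                           # orient to point upward (+z)
        axis = -axis
    return axis

def lean_angle_deg(markers):
    """Angle (degrees) between the torso principal axis and vertical,
    usable as a coarse torso-compensation indicator."""
    axis = torso_principal_axis(markers)
    cosang = np.clip(axis @ np.array([0.0, 0.0, 1.0]), -1.0, 1.0)
    return float(np.degrees(np.arccos(cosang)))
```

An upright marker column gives a lean of 0°; forward or lateral trunk compensation during reaching shows up as an increased angle.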



Healthcare ◽  
2021 ◽  
Vol 9 (8) ◽  
pp. 1076
Author(s):  
Laisi Cai ◽  
Dongwei Liu ◽  
Ye Ma

Low-cost, portable, and easy-to-use Kinect-based systems have achieved great popularity in out-of-the-lab motion analysis. The placement of a Kinect sensor significantly influences the accuracy of measured kinematic parameters for dynamic tasks. We conducted an experiment to investigate the impact of sensor placement on the accuracy of upper limb kinematics during a typical upper limb functional task, the drinking task. Using a 3D motion capture system as the gold standard, we tested twenty-one Kinect positions combining three distances and seven orientations. Upper limb joint angles, including shoulder flexion/extension, shoulder adduction/abduction, shoulder internal/external rotation, and elbow flexion/extension, were calculated via our developed Kinect kinematic model and the UWA kinematic model for both the Kinect-based system and the 3D motion capture system. We extracted the angles at the point of the target achieved (PTA). The mean absolute error (MAE) with respect to the standard represents the Kinect-based system's performance. We conducted a two-way repeated-measures ANOVA to explore the impacts of distance and orientation on the MAEs for all upper limb angles. There was a significant main effect of orientation; the main effect of distance and the interaction effect did not reach statistical significance. The post hoc LSD test for orientation showed that the effect of orientation is joint-dependent and plane-dependent. For a complex task (e.g., drinking) that involves body occlusion, placing a Kinect sensor directly in front of the subject is not a good choice. We suggest placing the Kinect sensor on the contralateral side of the subject at an orientation of around 30° to 45° for upper limb functional tasks. For dynamic tasks in general, we put forward the following recommendations for placing a Kinect sensor. First, set an optimal sensor position for capture, making sure that all investigated joints are visible during the whole task. Second, sensor placement should avoid body occlusion at maximum extension. Third, if an optimal location cannot be achieved in an out-of-the-lab environment, researchers can place the Kinect sensor at an optimal orientation by trading off distance. Last, to assess the functions of both limbs, users can relocate the sensor and re-evaluate the other side once they have finished evaluating one side of the subject.
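The accuracy metric behind the placement comparison, mean absolute error against the reference system, is straightforward to compute. Below is a minimal sketch assuming joint angles (in degrees) extracted at the PTA, one value per trial; the function names and dictionary layout are illustrative, not the authors' code.

```python
import numpy as np

def mae_deg(kinect_angles, mocap_angles):
    """Mean absolute error (degrees) between angles from a Kinect-based
    system and a reference motion capture system, e.g. elbow flexion
    at the point of target achieved (PTA), one value per trial."""
    k = np.asarray(kinect_angles, dtype=float)
    m = np.asarray(mocap_angles, dtype=float)
    return float(np.mean(np.abs(k - m)))

def placement_errors(trials):
    """Summarize MAE per sensor orientation, given a mapping of
    {orientation_deg: (kinect_angles, mocap_angles)}."""
    return {ori: mae_deg(k, m) for ori, (k, m) in trials.items()}
```

Comparing the per-orientation MAEs across conditions is the descriptive step that precedes the repeated-measures ANOVA reported in the study.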



2015 ◽  
Vol 772 ◽  
pp. 329-333
Author(s):  
Ali Soroush ◽  
Farzam Farahmand

The aim of this study was to determine the workspace of the surgeon's body, in order to design more efficient surgical robots for the operating room. Five wearable inertial sensors were placed near the wrist and elbow joints and on the thorax of surgeons to track the orientation of the upper limb. Assuming that the lengths of the five segments of the upper limb were known, the inertial sensor measurements were used to determine the positions of the wrist and elbow joints via an established kinematic model. Subsequently, to assess the workspace of the surgeon's upper body, raw data were collected during arthroscopic and laparoscopic operations. Experimental results demonstrated that the workspaces of the surgeon's joints are limited and predefined. The results can be used for designing surgical robots and surgeon body supports.
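The chain described above, segment orientations from inertial sensors combined with known segment lengths to yield joint positions, is a standard forward-kinematics computation. A minimal sketch under assumed conventions (shoulder as the origin, each segment's long axis pointing along −z in its local frame, orientations given as rotation matrices); this is not the authors' established model.

```python
import numpy as np

def rot_y(deg):
    """Rotation matrix about the y-axis (e.g. arm elevation in the
    sagittal plane), as might be derived from an inertial sensor."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def arm_positions(shoulder, R_upper, R_fore, l_upper, l_fore):
    """Elbow and wrist positions from segment orientations and lengths.
    Assumption: each segment's long axis points along -z (down) in its
    local frame; R_upper and R_fore map local to global coordinates."""
    down = np.array([0.0, 0.0, -1.0])
    elbow = shoulder + R_upper @ down * l_upper
    wrist = elbow + R_fore @ down * l_fore
    return elbow, wrist
```

Sweeping the sensor-derived rotations over a recorded procedure and collecting the resulting wrist positions traces out the joint workspace the study set out to measure.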





2014 ◽  
Vol 607 ◽  
pp. 764-767
Author(s):  
Qiang Wang ◽  
Run Ji

This paper presents a new method of controlling a rehabilitative training system for patients with upper limb movement disorders. Motion parameters captured from the patient's healthy limb are processed by computer; a kinematic model of the human upper limb joints is analyzed to obtain the kinematic parameters of the upper body. A virtual upper limb model system is established for different categories of patients for the detection of the relevant parameters. In accordance with the requirements of upper limb rehabilitation training, the dynamics model of the human upper limb is studied, and a method is indicated that may provide scientific and effective training for the functional rehabilitation of patients.





Author(s):  
Pyeong-Gook Jung ◽  
Sehoon Oh ◽  
Gukchan Lim ◽  
Kyoungchul Kong

Motion capture systems play an important role in healthcare and sport-training systems. In particular, there is great demand for a mobile motion capture system that enables people to monitor their health condition and to practice sport postures anywhere, at any time. Motion capture systems based on infrared or vision cameras, however, require a special setting, which hinders their application to a mobile system. In this paper, a mobile three-dimensional motion capture system is developed based on inertial sensors and smart shoes. Sensor signals are measured and processed by a mobile computer; thus, the proposed system enables the analysis and diagnosis of postures during outdoor sports as well as indoor activities. The measured signals are transformed into quaternions to avoid the gimbal lock effect. In order to improve the precision of the proposed motion capture system in an open, outdoor space, a frequency-adaptive sensor fusion method and a kinematic model are utilized to reconstruct whole-body motion in real time. The reference point is continuously updated by the smart shoes, which measure the ground reaction forces.
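The quaternion transformation mentioned above is what sidesteps gimbal lock: at a pitch of ±90° two Euler rotation axes align and a degree of freedom is lost, while the quaternion representation stays well defined. A minimal sketch of the standard z-y-x (yaw-pitch-roll) Euler-to-quaternion conversion, not the authors' sensor-fusion pipeline:

```python
import math

def euler_to_quaternion(roll, pitch, yaw):
    """Convert z-y-x (yaw-pitch-roll) Euler angles in radians to a unit
    quaternion (w, x, y, z). Unlike Euler angles, the quaternion has no
    singularity at pitch = +/-90 degrees (gimbal lock)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)
```

Orientation estimates in this form can then be composed by quaternion multiplication and interpolated smoothly, which is what makes them convenient for real-time sensor fusion.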



Author(s):  
Graciela Rodríguez-Vega ◽  
Dora Aydee Rodríguez-Vega ◽  
Xiomara Penelope Zaldívar-Colado ◽  
Ulises Zaldívar-Colado ◽  
Rafael Castillo-Ortega

