The effect of a virtual reality environment on gaze behaviour and motor skill learning

2020 ◽  
Vol 50 ◽  
pp. 101721
Author(s):  
David J. Harris ◽  
Gavin Buckingham ◽  
Mark R. Wilson ◽  
Jack Brookes ◽  
Faisal Mushtaq ◽  
...  

Virtual reality (VR) systems hold significant potential for both training and experimentation, as they provide precise control over the environment and the possibility to untether tasks from their normal physical constraints. However, the artificial creation of depth in stereoscopic displays, and the reduced availability of haptic information, may affect how visually-guided motor tasks are performed in the virtual world. If so, tasks learned in VR may be unrepresentative of real skills, and therefore unlikely to elicit positive transfer to the real world. In Experiment 1 we tested whether learning a visually-guided motor skill (golf putting) in virtual reality could transfer to real-world improvements. Despite the perceptual limitations imposed by the virtual environment, training novice golfers in VR led to improvements in real putting that were comparable to real-world practice. Experiment 2 explored these effects in more skilled golfers, and examined changes in gaze behaviour (quiet eye) that resulted from the more immediate use of VR (i.e. as a tool for ‘warming up’). VR use was found to impair gaze control (quiet eye) and putting accuracy when used immediately prior to real-world putting. Overall, these findings demonstrate the potential for VR training, but also highlight that fundamental questions remain about how the altered perceptual environment of VR affects visually-guided skills.

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Lasse Christiansen ◽  
Malte Nejst Larsen ◽  
Mads Just Madsen ◽  
Michael James Grey ◽  
Jens Bo Nielsen ◽  
...  

Motor skill acquisition depends on central nervous plasticity. However, the behavioural determinants leading to long-lasting corticospinal plasticity and motor expertise remain unexplored. Here we investigate behavioural and electrophysiological effects of individually tailored progressive practice during long-term motor skill training. Two groups of participants practiced a visuomotor task requiring precise control of the right digiti minimi for 6 weeks. One group trained with constant task difficulty, while the other group trained with progressively increasing task difficulty, i.e. difficulty continuously adjusted to their individual skill level. Compared to constant practice, progressive practice resulted in two-fold greater performance at an advanced task level and associated increases in corticospinal excitability. Differences were maintained 8 days later, whereas both groups demonstrated equal retention 14 months later. We demonstrate that progressive practice enhances motor skill learning and promotes corticospinal plasticity. These findings underline the importance of continuously challenging patients and athletes to promote neural plasticity, skilled performance, and recovery.
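The progressive condition described above amounts to an adaptive difficulty schedule. As a minimal sketch of the general idea, the toy rule below (a 2-up/1-down staircase with assumed step sizes and bounds, not the authors' actual protocol) raises difficulty when the last two trials were successes and lowers it after a failure:

```python
def update_difficulty(level, recent_hits, step=1, max_level=20):
    """Toy adaptive staircase (illustrative, not the study's procedure):
    raise task difficulty when the last two trials were successes,
    lower it after a failure, and clamp to [1, max_level]."""
    if len(recent_hits) >= 2 and recent_hits[-1] and recent_hits[-2]:
        return min(level + step, max_level)
    if recent_hits and not recent_hits[-1]:
        return max(level - step, 1)
    return level

# Example: difficulty climbs with success and drops after an error.
hits = []
level = 1
for outcome in [True, True, True, True, False, True]:
    hits.append(outcome)
    level = update_difficulty(level, hits)
# level is now 3
```

In a real training protocol the rule, step size, and bounds would be calibrated to the task; the point is only that difficulty continuously tracks the individual's current skill.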


F1000Research ◽  
2014 ◽  
Vol 3 ◽  
pp. 72
Author(s):  
Virginia Way Tong Chu ◽  
Terence David Sanger

Practice of movement in virtual-reality and other artificially altered environments has been proposed as a method for rehabilitation following neurological injury and for training new skills in healthy humans. For such training to be useful, there must be transfer of learning from the artificial environment to the performance of desired skills in the natural environment. Therefore, an important assumption of such methods is that practice in the altered environment engages the same learning and plasticity mechanisms that are required for skill performance in the natural environment. We test the hypothesis that transfer of learning may fail because the learning and plasticity mechanism that adapts to the altered environment is different from the learning mechanism required for improvement of motor skill. In this paper, we propose that a model separating skill learning from environmental adaptation is necessary to explain the learning and aftereffects observed in virtual reality experiments. In particular, we studied the condition in which practice in the altered environment should lead to correct skill performance in the original environment. Our two-mechanism model predicts that aftereffects will still be observed on returning to the original environment, indicating a lack of skill transfer from the artificial environment to the original environment. To illustrate the model's prediction, we tested 10 healthy participants on the interaction between a simple overlearned motor skill (straight hand movements to targets in different directions) and an artificially altered visuomotor environment (rotation of visual feedback of the results of movement). As predicted by the model, participants showed adaptation to the altered environment and aftereffects on return to the baseline environment, even when practice in the altered environment should have led to correct skill performance.
The presence of aftereffects under all conditions that involved changes in environment demonstrates the separation of environmental adaptation and skill learning. Our results support the existence of two distinct learning modules with different adaptation properties. We therefore suggest that adaptation to an altered environment may not be useful for training new skills.
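The qualitative prediction, that a process tracking the environment produces aftereffects on return to baseline, can be illustrated with a standard trial-by-trial error-correction model. This is a generic textbook sketch under assumed parameters (a 30-degree rotation, a learning rate of 0.2), not the authors' two-mechanism model:

```python
def simulate_adaptation(rotation_deg, n_trials, lr=0.2):
    """Trial-by-trial error correction: an internal estimate of the
    environmental rotation is updated by a fraction of each residual
    error. Parameters are arbitrary, for illustration only."""
    estimate = 0.0
    history = []
    for _ in range(n_trials):
        error = rotation_deg - estimate   # residual directional error
        estimate += lr * error            # partial correction
        history.append(estimate)
    return estimate, history

# Adapt to a 30-degree rotation, then return to the unrotated world:
adapted, history = simulate_adaptation(30.0, 40)
aftereffect = -adapted   # movements now err opposite to the rotation
```

Because the estimate tracks the environment, removing the rotation leaves an aftereffect of roughly equal and opposite size, regardless of whether the practised movements themselves were "correct" for the original environment.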


Perception ◽  
2021 ◽  
Vol 50 (9) ◽  
pp. 783-796
Author(s):  
Lisa P. Y. Lin ◽  
Christopher J. Plack ◽  
Sally A. Linkenauger

The ability to accurately perceive the extent over which one can act is requisite for the successful execution of visually guided actions. Yet, like other outcomes of perceptual-motor experience, our perceived action boundaries are not stagnant, but in constant flux. Hence, the perceptual systems must account for variability in one’s action capabilities in order for the perceiver to determine when they are capable of successfully performing an action. Recent work has found that, after reaching with a virtual arm that varied between short and long each time they reached, individuals determined their perceived action boundaries using the most liberal reaching experience. However, these studies were conducted in virtual reality, and the perceptual systems may handle variability differently in a real-world setting. To test this hypothesis, we created a modified orthopedic elbow brace that mimics injury in the upper limb by restricting elbow extension via remote control. Participants were asked to make reachability judgments after training in which the maximum extent of their reaching ability was either unconstricted, constricted or variable over several calibration trials. Findings from the current study did not conform to those in virtual reality; participants were more conservative with their reachability estimates after experiencing variability in a real-world setting.


2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Samuel James Vine

Directing ocular fixations towards a target assists the planning and control of visually-guided actions. In far aiming tasks, the quiet eye, an instance of pre-movement gaze anchoring, has been extensively studied as a key performance variable. However, theories of the quiet eye are yet to establish the exact functional role of the location and duration of the fixation. The present work used immersive virtual reality to manipulate key parameters of the quiet eye, location (experiment 1) and duration (experiment 2), to test competing theoretical predictions about their importance. Across two pre-registered experiments, novice participants (n=127) completed a series of golf putts while their eye movements, putting accuracy, and putting kinematics were recorded. In experiment 1, participants’ pre-movement fixation was cued to locations on the ball, near the ball, and far from the ball. In experiment 2, long and short quiet eye durations were induced using auditory tones as cues to movement phases. Linear mixed effects models indicated that manipulations of location and duration had little effect on performance or movement kinematics. The findings suggest that, for novices, the spatial and temporal parameters of the final fixation may not be critical for movement pre-programming and may instead reflect attentional control or movement inhibition functions.
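Quiet eye duration in studies like this is typically operationalised as the final target-directed fixation beginning before movement initiation. The sketch below uses a simplified operational definition (unbroken runs of on-target samples; real measures also apply spatial and temporal thresholds), so it is illustrative rather than the study's actual pipeline:

```python
def quiet_eye_duration(samples, movement_onset):
    """Toy quiet-eye measure: duration (ms) of the last unbroken run
    of on-target gaze samples that begins before movement onset.
    `samples` is a list of (time_ms, on_target: bool) pairs."""
    runs = []          # (start, end) of unbroken on-target runs
    start = None
    for t, on_target in samples:
        if on_target and start is None:
            start = t
        elif not on_target and start is not None:
            runs.append((start, t))
            start = None
    if start is not None:
        runs.append((start, samples[-1][0]))
    # The quiet-eye fixation must begin before movement initiation.
    candidates = [(s, e) for s, e in runs if s < movement_onset]
    if not candidates:
        return 0
    s, e = candidates[-1]
    return e - s
```

For example, a final on-target run from 250 ms to 600 ms, with movement onset at 400 ms, yields a quiet eye duration of 350 ms.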


2020 ◽  
Author(s):  
Oliver Jacobs ◽  
Nicola C Anderson ◽  
Walter F. Bischof ◽  
Alan Kingstone

People naturally move both their head and eyes to attend to information. Yet, little is known about how the head and eyes coordinate in attentional selection due to the relative sparsity of past work that has simultaneously measured head and gaze behaviour. In the present study, participants were asked to view fully immersive 360-degree scenes using a virtual reality headset with built-in eye tracking. Participants viewed these scenes through a small moving window that was yoked either to their head or gaze movements. We found that limiting peripheral information via the head- or gaze-contingent windows affected head and gaze movements differently. Compared with free viewing, gaze-contingent viewing was more disruptive than head-contingent viewing, indicating that gaze-based selection is more reliant on peripheral information than head-based selection. These data dovetail with the nested effectors hypothesis, which proposes that people prefer to use their head for exploration into non-visible space while using their eyes to exploit visible or semi-visible areas of space. This suggests that real-world orienting may be more head-based than previously thought. Our work also highlights the utility, ecological validity, and future potential of unconstrained head and eye tracking in virtual reality.


2008 ◽  
Author(s):  
Michelle V. Thompson ◽  
Janet L. Utschig ◽  
Mikaela K. Vaughan ◽  
Marc V. Richard ◽  
Benjamin A. Clegg

2020 ◽  
Author(s):  
David Harris ◽  
Mark Wilson ◽  
Tim Holmes ◽  
Toby de Burgh ◽  
Samuel James Vine

Head-mounted eye tracking has been fundamental for developing an understanding of sporting expertise, as the way in which performers sample visual information from the environment is a major determinant of successful performance. There is, however, a long-running tension between the desire to study realistic, in-situ gaze behaviour and the difficulties of acquiring accurate ocular measurements in dynamic and fast-moving sporting tasks. Here, we describe how immersive technologies, such as virtual reality, offer an increasingly compelling approach for conducting eye movement research in sport. The possibility of studying gaze behaviour in representative and realistic environments, but with high levels of experimental control, could enable significant strides forward for eye tracking in sport and improve understanding of how eye movements underpin sporting skills. By providing a rationale for virtual reality as an optimal environment for eye tracking research, as well as outlining practical considerations related to hardware, software and data analysis, we hope to guide researchers and practitioners in the use of this approach.
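On the data-analysis side, raw gaze samples from a VR headset must usually be segmented into fixations before measures such as the quiet eye can be computed. One common approach is a dispersion-threshold (I-DT) detector; the sketch below uses assumed thresholds (1 degree dispersion, 100 ms minimum duration) purely for illustration:

```python
def detect_fixations(gaze, max_dispersion=1.0, min_duration=100):
    """Minimal dispersion-threshold (I-DT) fixation detector.
    `gaze` is a list of (time_ms, x_deg, y_deg) samples; returns
    (start_ms, end_ms) pairs. Thresholds are typical assumed values."""
    fixations = []
    i, n = 0, len(gaze)
    while i < n:
        j = i
        xs, ys = [gaze[i][1]], [gaze[i][2]]
        # Grow the window while total dispersion stays under threshold.
        while j + 1 < n:
            nx, ny = gaze[j + 1][1], gaze[j + 1][2]
            dispersion = ((max(xs + [nx]) - min(xs + [nx]))
                          + (max(ys + [ny]) - min(ys + [ny])))
            if dispersion > max_dispersion:
                break
            xs.append(nx)
            ys.append(ny)
            j += 1
        if gaze[j][0] - gaze[i][0] >= min_duration:
            fixations.append((gaze[i][0], gaze[j][0]))
            i = j + 1
        else:
            i += 1
    return fixations
```

In practice, headset-based gaze data also needs compensation for head movement (gaze is usually reported in world coordinates by the SDK) and filtering of blinks and tracking losses before fixation detection.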

