“Mine Works Better”: Examining the Influence of Embodiment in Virtual Reality on the Sense of Agency During a Binary Motor Imagery Task With a Brain-Computer Interface

2021 ◽  
Vol 12 ◽  
Author(s):  
Hamzah Ziadeh ◽  
David Gulyas ◽  
Louise Dørr Nielsen ◽  
Steffen Lehmann ◽  
Thomas Bendix Nielsen ◽  
...  

Motor imagery-based brain-computer interfaces (MI-BCI) have been proposed as a means for stroke rehabilitation, and combined with virtual reality they allow game-based interactions to be introduced into rehabilitation. However, control of the MI-BCI may be difficult to obtain, and users may face poor performance, which frustrates them and potentially affects their motivation to use the technology. Such decreases in motivation could be counteracted by increasing the users' sense of agency over the system. The aim of this study was to understand whether embodiment (ownership) of a hand depicted in virtual reality can enhance the sense of agency and thereby reduce frustration in an MI-BCI task. Twenty-two healthy participants took part in a within-subject study in which their sense of agency was compared across two embodiment experiences: 1) an avatar hand (with body) or 2) abstract blocks. Both representations closed with a similar motion for spatial congruency and popped a balloon as a result. The hand/blocks were controlled through an online MI-BCI. Each condition consisted of 30 trials of MI-activation of the avatar hand/blocks. After each condition a questionnaire probed the participants' sense of agency, ownership, and frustration, followed by a semi-structured interview in which the participants elaborated on their ratings. Both conditions supported similar levels of MI-BCI performance. A significant correlation between ownership and agency was observed (r = 0.47, p = 0.001). As intended, the avatar hand yielded much higher ownership than the blocks. When controlling for performance, ownership increased the sense of agency. In conclusion, designers of BCI-based rehabilitation applications can draw on anthropomorphic avatars for the visual mapping of the trained limb to improve ownership. While ownership did not reduce frustration, it can improve perceived agency given sufficient BCI performance. In future studies the findings should be validated in stroke patients, since they may perceive agency and ownership differently than able-bodied users.
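
As a minimal sketch of the kind of analysis reported above (not the authors' code), the ownership–agency correlation and the performance-controlled relationship could be computed as follows; the CSV file and column names are assumptions for illustration only.

```python
# Hypothetical illustration: Pearson correlation between ownership and agency,
# then a regression that holds MI-BCI performance constant.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr

df = pd.read_csv("questionnaire_ratings.csv")  # assumed file: one row per participant/condition

r, p = pearsonr(df["ownership"], df["agency"])
print(f"ownership-agency correlation: r = {r:.2f}, p = {p:.3f}")

# Does ownership still predict agency when BCI performance is held constant?
X = sm.add_constant(df[["ownership", "bci_accuracy"]])
model = sm.OLS(df["agency"], X).fit()
print(model.summary())
```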

Author(s):  
Shaocheng Wang ◽  
Ehsan Tarkesh Esfahani ◽  
V. Sundararajan

Research in brain-computer interfaces has focused primarily on motor imagery tasks such as those involving movement of a cursor or other objects on a computer screen. In such applications, it is important to detect when the user is interested in moving an object and when the user is not active in this task. This paper evaluates the steady-state visual evoked potential (SSVEP) as a feedback mechanism to confirm the mental state of the user during motor imagery. These potentials are evoked when a subject looks at a flashing object of interest. Four different experiments are conducted in this paper. Subjects are asked to imagine the movement of a flashing object in a given direction. If the subject is engaged in this task, the SSVEP signal will be detectable in the visual cortex, and the motor imagery task is therefore confirmed. During the experiment, the EEG signal is recorded at four locations near the visual cortex. Using a weighting scheme, the best combination of the recorded signals is selected to evaluate the presence of the flashing frequency. The experimental results show that the SSVEP can be detected even during complex motor imagery of flickering objects. A detection rate of 85% is achieved with the refresh time for SSVEP feedback set to 0.5 seconds.
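
A minimal sketch of this kind of SSVEP confirmation step is shown below; it is not the authors' pipeline. The sampling rate, flicker frequency, channel weights, and detection threshold are all assumptions.

```python
# Hypothetical illustration: estimate SSVEP power at the flicker frequency from
# four channels near the visual cortex within a 0.5-s window, then combine the
# channels with a weighting scheme and threshold the result.
import numpy as np
from scipy.signal import welch

FS = 256            # assumed sampling rate (Hz)
FLICKER_HZ = 15.0   # assumed flicker frequency of the target object

def ssvep_score(window, weights, fs=FS, target=FLICKER_HZ):
    """window: (4, n_samples) EEG segment from occipital channels."""
    freqs, psd = welch(window, fs=fs, nperseg=window.shape[1], axis=-1)
    idx = np.argmin(np.abs(freqs - target))      # frequency bin closest to the flicker
    band = psd[:, idx] / psd.mean(axis=-1)       # normalise against broadband power
    return np.dot(weights, band)                 # weighted combination across channels

window = np.random.randn(4, int(0.5 * FS))       # placeholder 0.5-s segment
weights = np.array([0.4, 0.3, 0.2, 0.1])         # assumed channel weights
confirmed = ssvep_score(window, weights) > 2.0   # assumed detection threshold
```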


2021 ◽  
Vol 15 ◽  
Author(s):  
Nikki Leeuwis ◽  
Alissa Paas ◽  
Maryam Alimardani

Brain-computer interfaces (BCIs) are communication bridges between a human brain and the external world, enabling humans to interact with their environment without muscle intervention. Their functionality, therefore, depends on both the BCI system and the cognitive capacities of the user. Motor-imagery BCIs (MI-BCI) rely on the users' mental imagination of body movements. However, not all users are able to modulate their brain activity sufficiently to control an MI-BCI, a problem known as BCI illiteracy or inefficiency. The underlying mechanism of this phenomenon and the cause of such differences among users are not yet fully understood. In this study, we investigated the impact of several cognitive and psychological measures on MI-BCI performance. Fifty-five novice BCI users participated in a left- versus right-hand motor imagery task. In addition to their BCI classification error rate and demographics, psychological measures including personality factors, affinity for technology, and motivation during the experiment were collected, as well as cognitive measures including visuospatial memory, spatial ability, and Vividness of Visual Imagery. Factors found to have a significant impact on MI-BCI performance were Vividness of Visual Imagery and the personality factors of orderliness and autonomy. These findings shed light on individual traits that lead to difficulty in BCI operation and hence can help with early prediction of inefficiency among users in order to optimize their training.
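
A minimal sketch of how such trait–performance relationships could be examined is given below, assuming a per-participant table of predictors; it is not the authors' analysis code, and the column names are invented for illustration.

```python
# Hypothetical illustration: regress MI-BCI classification error on Vividness of
# Visual Imagery and the personality factors reported as significant.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("participants.csv")  # assumed file: 55 rows, one per BCI user

predictors = df[["vviq", "orderliness", "autonomy"]]
X = sm.add_constant(predictors)
model = sm.OLS(df["classification_error"], X).fit()
print(model.summary())  # inspect which traits relate to MI-BCI performance
```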


2018 ◽  
Author(s):  
Hanna-Leena Halme ◽  
Lauri Parkkonen

Long calibration time hinders the feasibility of brain-computer interfaces (BCI). If other subjects' data were used for training the classifier, BCI-based neurofeedback practice could start without the initial calibration. Here, we compare methods for inter-subject decoding of left- vs. right-hand motor imagery (MI) from MEG and EEG. Six methods were tested on data involving MEG and EEG measurements of healthy participants. Only subjects with good within-subject accuracies were selected for inter-subject decoding. Three methods were based on the Common Spatial Patterns (CSP) algorithm, and three others on logistic regression with l1- or l2,1-norm regularization. The decoding accuracy was evaluated using 1) MI and 2) passive movements (PM) for training, separately for MEG and EEG. When the classifier was trained on MI, the best accuracies across subjects (mean 70.6% for MEG, 67.7% for EEG) were obtained using multi-task learning (MTL) with logistic regression and l2,1-norm regularization. MEG yielded slightly better average accuracies than EEG. When PM were used for training, none of the inter-subject methods yielded above-chance-level (58.7%) accuracy. In conclusion, MTL and training with other subjects' MI is efficient for inter-subject decoding of MI. Passive movements of other subjects are likely suboptimal for training MI classifiers.
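
Below is a minimal sketch of one of the tested method families (CSP features with a regularized logistic regression classifier), not the authors' implementation; the l2,1-norm MTL variant is not shown, and the data arrays are placeholders. Pooling other subjects' epochs for training and testing on a held-out subject illustrates the inter-subject setting.

```python
# Hypothetical illustration: CSP spatial filtering + sparse logistic regression
# trained on other subjects' MI epochs and evaluated on a left-out subject.
import numpy as np
from mne.decoding import CSP
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# X_train: (n_epochs, n_channels, n_times) MI epochs pooled from other subjects
# y_train: 0 = left hand, 1 = right hand; X_test/y_test from the left-out subject
X_train = np.random.randn(200, 64, 500)   # placeholder data
y_train = np.random.randint(0, 2, 200)
X_test = np.random.randn(50, 64, 500)
y_test = np.random.randint(0, 2, 50)

clf = make_pipeline(
    CSP(n_components=6, log=True),                               # spatial filtering
    LogisticRegression(penalty="l1", solver="liblinear", C=0.5)  # sparse classifier
)
clf.fit(X_train, y_train)
print("inter-subject accuracy:", clf.score(X_test, y_test))
```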


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
M. Iosa ◽  
G. Morone ◽  
A. Fusco ◽  
M. Bragoni ◽  
P. Coiro ◽  
...  

Stroke is the leading cause of long-term disability among adults in industrialized societies. Rehabilitation efforts aim to prevent long-term impairments, but rehabilitative outcomes are, in practice, still poor. Novel tools based on new technologies have been developed to improve motor recovery. In this paper, we consider seven promising technologies that may improve the rehabilitation of patients with stroke in the near future: (1) robotic devices for lower and upper limb recovery, (2) brain-computer interfaces, (3) noninvasive brain stimulators, (4) neuroprostheses, (5) wearable devices for quantitative human movement analysis, (6) virtual reality, and (7) tablet PCs used for neurorehabilitation.


2011 ◽  
Vol 29 (supplement) ◽  
pp. 352-377 ◽  
Author(s):  
Seon Hee Jang ◽  
Frank E Pollick

The study of dance has helped advance our understanding of how human brain networks for action observation are influenced by experience. However, previous studies have not examined the effect of extensive visual experience alone: for example, an art critic or dance fan who has rich experience of watching dance but negligible experience performing it. To explore the effect of purely visual experience, we performed a single experiment using functional magnetic resonance imaging (fMRI) to compare the neural processing of dance actions in three groups: a) 14 ballet dancers, b) 10 experienced viewers, and c) 12 novices without any extensive dance or viewing experience. Each of the 36 participants viewed short 2-second displays of ballet derived from motion capture of a professional ballerina. These displays represented the ballerina as only points of light at the major joints. We wished to study the action observation network broadly and thus included two different types of display and two different tasks for participants to perform. The two displays were: a) brief movies of a ballet action and b) frames from the ballet movies with the points of light connected by lines to show a ballet posture. The two tasks were: a) passively observe the display and b) imagine performing the action depicted in the display. The two levels of display and task were combined factorially to produce four experimental conditions (observe movie, observe posture, motor imagery of movie, motor imagery of posture). The set of stimuli used in the experiment is available for download with this paper. A random-effects ANOVA was performed on brain activity, and an effect of experience was obtained in seven brain areas: right temporoparietal junction (TPJ), left retrosplenial cortex (RSC), right primary somatosensory cortex (S1), bilateral primary motor cortex (M1), right orbitofrontal cortex (OFC), and right temporal pole (TP). The patterns of activation were plotted in each of these areas (TPJ, RSC, S1, M1, OFC, TP) to investigate more closely how the effect of experience changed across them. For this analysis, novices were treated as the baseline and the relative effect of experience was examined in the dancer and experienced-viewer groups. Interpretation of these results suggests that visual and motor experience appear equivalent in producing more extensive early processing of dance actions at early stages of representation (TPJ and RSC), and we hypothesise that this could be due to the involvement of autobiographical memory processes. The pattern of results found for dancers in S1 and M1 suggests that their perception of dance actions is enhanced by embodied processes; for example, the S1 results are consistent with claims that this brain area shows mirror properties. The pattern of results found for the experienced viewers in OFC and TP suggests that their perception of dance actions is enhanced by cognitive processes, for example involving aspects of social cognition and hedonic processing: the experienced viewers find the motor imagery task more pleasant and have richer connections of dance to social memory. While aspects of our interpretation are speculative, the core results clearly show common and distinct aspects of how viewing experience and physical experience shape brain responses to watching dance.
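
As a simplified illustration only (the study's analysis is a voxelwise random-effects ANOVA carried out in fMRI software), a between-groups comparison of mean activation in a single region of interest could look like the sketch below; the ROI values are placeholders.

```python
# Hypothetical illustration: one-way ANOVA on mean ROI activation across the
# three experience groups (dancers, experienced viewers, novices).
import numpy as np
from scipy.stats import f_oneway

dancers = np.random.randn(14)   # e.g. mean beta in right TPJ per dancer
viewers = np.random.randn(10)   # experienced viewers
novices = np.random.randn(12)   # novices

F, p = f_oneway(dancers, viewers, novices)
print(f"effect of experience in this ROI: F = {F:.2f}, p = {p:.3f}")
```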


Author(s):  
Yu-Sheng Yang ◽  
Alicia M. Koontz ◽  
Yu-Hsuan Hsiao ◽  
Cheng-Tang Pan ◽  
Jyh-Jong Chang

Maneuvering a wheelchair is an important necessity for the everyday life and social activities of people with a range of physical disabilities. In real life, however, wheelchair users face several common challenges: articulate steering, spatial relationships, and negotiating obstacles. Therefore, our research group has developed a head-mounted display (HMD)-based intuitive virtual reality (VR) simulator for wheelchair propulsion. The aim of this study was to investigate the feasibility and efficacy of this VR simulator for wheelchair propulsion performance. Twenty manual wheelchair users (16 men and 4 women) with spinal cord injuries ranging from T8 to L2 participated in this study. The differences in wheelchair propulsion kinematics between immersive and non-immersive VR environments were assessed using a 3D motion analysis system. Subjective data on the HMD-based intuitive VR simulator were collected with a Presence Questionnaire and individual semi-structured interviews at the end of the trial. Results indicated that propulsion performance was very similar in terms of start angle (p = 0.34), end angle (p = 0.46), stroke angle (p = 0.76), and shoulder movement (p = 0.66) between the immersive and non-immersive VR environments. In the VR episode featuring an uphill journey, an increase in propulsion speed (p < 0.01) and cadence (p < 0.01) was found, as well as greater forward trunk inclination (p = 0.01). Qualitative interviews showed that the VR simulator made an attractive, novel impression and therefore demonstrated potential as a tool for stimulating training motivation. This HMD-based intuitive VR simulator can be an effective resource to enhance wheelchair maneuverability experiences.
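
A minimal sketch of the within-subject comparison reported above is shown below; it is not the study's analysis script, and the file and column names are assumptions.

```python
# Hypothetical illustration: paired comparisons of propulsion kinematics between
# the immersive and non-immersive VR conditions for the same 20 wheelchair users.
import pandas as pd
from scipy.stats import ttest_rel

df = pd.read_csv("propulsion_kinematics.csv")  # assumed file: one row per participant

for measure in ["start_angle", "end_angle", "stroke_angle", "shoulder_movement"]:
    t, p = ttest_rel(df[f"{measure}_immersive"], df[f"{measure}_nonimmersive"])
    print(f"{measure}: t = {t:.2f}, p = {p:.2f}")
```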


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Dheeraj Rathee ◽  
Haider Raza ◽  
Sujit Roy ◽  
Girijesh Prasad

Recent advancements in magnetoencephalography (MEG)-based brain-computer interfaces (BCIs) have shown great potential. However, the performance of current MEG-BCI systems is still inadequate, and one of the main reasons for this is the unavailability of open-source MEG-BCI datasets. MEG systems are expensive, and hence MEG datasets are not readily available for researchers to develop effective and efficient BCI-related signal processing algorithms. In this work, we release a 306-channel MEG-BCI dataset recorded at a 1 kHz sampling frequency during four mental imagery tasks (i.e. hand imagery, feet imagery, subtraction imagery, and word generation imagery). The dataset contains two sessions of MEG recordings performed on separate days for 17 healthy participants using a typical BCI imagery paradigm. To the best of our knowledge, this is currently the only publicly available MEG imagery BCI dataset. It can be used by the scientific community towards the development of novel pattern recognition and machine learning methods to detect brain activities related to motor imagery and cognitive imagery tasks using MEG signals.
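
As a minimal sketch of how such recordings might be loaded and epoched, assuming the data are distributed as Elekta/MEGIN .fif files (the file name and event codes below are assumptions, not taken from the dataset description):

```python
# Hypothetical illustration: load one MEG session with MNE-Python, band-pass
# filter it, and cut epochs around the imagery cues.
import mne

raw = mne.io.read_raw_fif("subject01_session1.fif", preload=True)  # assumed path
raw.filter(l_freq=1.0, h_freq=40.0)                                 # typical imagery band-pass

events = mne.find_events(raw)                                        # assumes a stimulus channel
event_id = {"hand": 1, "feet": 2, "subtraction": 3, "words": 4}      # assumed event codes
epochs = mne.Epochs(raw, events, event_id, tmin=-0.5, tmax=3.0,
                    baseline=(None, 0), preload=True)
print(epochs)
```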


2021 ◽  
Vol 14 (2) ◽  
pp. 205979912110307
Author(s):  
Dennis Mathysen ◽  
Ignace Glorieux

Virtual reality (VR) is still very much a niche technology despite its increasing popularity in recent years. VR has now reached a point where it can offer photorealistic experiences while also being consumer-friendly and affordable. However, so far only a very limited amount of software has been developed for the specific purpose of conducting (social science) research. In this article, we illustrate that integrating virtual reality to good effect in social science research does not necessarily require specialized hardware or software, an abundance of expertise in VR technology, or even a large budget. We do this by discussing our use of a method we have come to call ‘VR-assisted interviews’: conducting a (semi-structured) interview while respondents are confronted with a virtual environment viewed through a VR headset. This method allows respondents to focus on what they are seeing and experiencing, instead of having to worry about how to operate a device and navigate an interface they are using for the first time. ‘VR-assisted interviews’ are very user-friendly for respondents but also limit the options for interactivity. We believe this method can be a valuable alternative to more complex applications of VR technology in social science research, for both methodological and practical reasons.

