P300 Brain–Computer Interface-Based Drone Control in Virtual and Augmented Reality

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5765
Author(s):  
Soram Kim ◽  
Seungyun Lee ◽  
Hyunsuk Kang ◽  
Sion Kim ◽  
Minkyu Ahn

Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) into brain–computer interface (BCI) studies. However, few studies incorporate both AR and VR to compare performance across the two environments; a BCI application that runs in both VR and AR is therefore needed to allow such a comparison. In this study, we developed an open-source drone control application using a P300-based BCI that can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in both environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with the time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.
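
The abstract reports the amplitude and latency of the P300 component as online performance measures. As a minimal sketch of how such ERP features are commonly extracted (the authors' exact pipeline is not given here; the electrode, search window, and sampling rate below are assumptions), one can average target-stimulus epochs and locate the positive peak in a post-stimulus window:

```python
import numpy as np

def p300_amplitude_latency(epochs, fs, window=(0.25, 0.5)):
    """Peak amplitude and latency of the P300 from target-stimulus epochs.

    epochs -- ndarray (n_trials, n_samples): single-channel EEG time-locked
              to stimulus onset (e.g., electrode Pz)
    fs     -- sampling rate in Hz
    window -- search window in seconds after stimulus onset
    """
    erp = epochs.mean(axis=0)                       # average trials -> ERP waveform
    start, stop = (int(t * fs) for t in window)     # window bounds in samples
    peak = start + int(np.argmax(erp[start:stop]))  # largest positive deflection
    return erp[peak], peak / fs                     # (amplitude, latency in seconds)

# Synthetic demo: 40 one-second epochs at 256 Hz with a P300-like bump at 350 ms
fs = 256
t = np.arange(fs) / fs
template = 5.0 * np.exp(-((t - 0.35) ** 2) / (2 * 0.05 ** 2))  # microvolts
epochs = template + np.random.randn(40, fs)
print(p300_amplitude_latency(epochs, fs))
```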

Sensors ◽  
2021 ◽  
Vol 21 (6) ◽  
pp. 2234
Author(s):  
Sebastian Kapp ◽  
Michael Barz ◽  
Sergey Mukhametov ◽  
Daniel Sonntag ◽  
Jochen Kuhn

Currently, an increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research in, for example, the cognitive or educational sciences. While some research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n = 21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is resting, which is on par with state-of-the-art mobile eye trackers.
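
The accuracy and precision figures above are angular measures. As a hedged sketch of one common way to compute them (the paper's exact definitions and coordinate conventions may differ; the function names and demo values below are hypothetical), accuracy can be taken as the mean angular offset between gaze and target directions, and precision as the RMS angle between successive gaze samples:

```python
import numpy as np

def _unit(v):
    """Normalize vectors along the last axis."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def gaze_accuracy_precision(gaze_dirs, target_dir):
    """gaze_dirs: ndarray (n_samples, 3) of gaze direction vectors;
    target_dir: (3,) direction from the eye to the fixation target.

    Accuracy  = mean angular offset between gaze and target (degrees).
    Precision = RMS of the angle between successive gaze samples (degrees).
    """
    g, t = _unit(gaze_dirs), _unit(target_dir)
    offsets = np.degrees(np.arccos(np.clip(g @ t, -1.0, 1.0)))
    accuracy = offsets.mean()
    consec = np.clip(np.einsum("ij,ij->i", g[:-1], g[1:]), -1.0, 1.0)
    precision = np.sqrt(np.mean(np.degrees(np.arccos(consec)) ** 2))
    return accuracy, precision

# Demo: 500 noisy gaze samples around a target straight ahead
rng = np.random.default_rng(1)
gaze = _unit(np.array([0.0, 0.0, 1.0]) + rng.normal(0, 0.01, (500, 3)))
print(gaze_accuracy_precision(gaze, np.array([0.0, 0.0, 1.0])))
```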


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pei-Luen Patrick Rau ◽  
Jian Zheng ◽  
Zhi Guo

Purpose: This study aims to investigate "immersive reading," which occurs when individuals read text in a virtual reality (VR) or augmented reality (AR) environment.
Design/methodology/approach: In Experiment 1, 64 participants read text passages and answered multiple-choice questions in VR and AR head-mounted displays (HMDs), compared with doing the same task on a liquid crystal display (LCD). In Experiment 2, 31 participants performed the same reading tasks with two VR HMDs of different display quality.
Findings: With reading on the LCD as the baseline, participants reading in VR and AR HMDs received 82% (VR) and 88% (AR) of the information accurately. With the VR HMD of higher pixel density, participants tended to respond more accurately and faster in the speed-reading task, though the differences were not statistically significant.
Originality/value: The authors observed the speed and accuracy of reading in VR and AR environments compared with the reading speed and accuracy on an LCD monitor. The authors also compared reading performance on two VR HMDs that differed in display quality but were otherwise similar in every way.
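
The "not statistically significant" finding comes from a within-subject comparison across display conditions. As an illustrative sketch only (the paper does not specify its test here; the data below are synthetic and the effect sizes are made up), such a comparison is often run as a paired t-test over per-participant scores:

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant reading accuracy in two display conditions
# (within-subject design, as in the experiments above)
rng = np.random.default_rng(0)
acc_lcd = rng.normal(0.90, 0.05, 31)  # baseline: LCD monitor
acc_hmd = rng.normal(0.88, 0.05, 31)  # higher-pixel-density VR HMD

t_stat, p_val = stats.ttest_rel(acc_lcd, acc_hmd)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_val:.3f}")        # p > 0.05: no significant difference
```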


2018 ◽  
Vol 7 (4) ◽  
pp. 2722
Author(s):  
Raymond Sutjiadi ◽  
Timothy John Pattiasina ◽  
Resmana Lim

In this research, a Brain–Computer Interface (BCI) based on Steady-State Visually Evoked Potentials (SSVEP) for computer control applications using a Support Vector Machine (SVM) is presented. For many years, people have speculated that electroencephalographic activity or other electrophysiological measures of brain function might provide a new non-muscular channel for sending messages or commands to the external world. BCI is a fast-growing emergent technology in which researchers aim to build a direct channel between the human brain and the computer. BCI systems provide a new communication channel for disabled people. Among the many different types of BCI systems, the SSVEP-based approach has attracted particular attention due to its ease of use and straightforward signal processing. SSVEPs are usually detected over the occipital lobe of the brain while the subject is looking at a flickering light source. In this paper, an SVM is used to classify SSVEP responses in electroencephalogram data with appropriate features. In an experiment using a 14-channel electroencephalography (EEG) device, our SSVEP-based BCI system reached an accuracy of 80 percent using a linear SVM kernel as the classification engine.
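
The abstract names a linear-kernel SVM over SSVEP features but not the features themselves. As a minimal sketch under stated assumptions (spectral power at each flicker frequency and its second harmonic is a common SSVEP feature choice; the frequencies, sampling rate, and function names below are assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.svm import SVC

def ssvep_features(eeg, fs, stim_freqs):
    """Spectral power at each stimulation frequency and its 2nd harmonic.

    eeg        -- ndarray (n_channels, n_samples) of one trial
    fs         -- sampling rate in Hz
    stim_freqs -- flicker frequencies of the targets, in Hz
    """
    power = np.abs(np.fft.rfft(eeg, axis=1)) ** 2      # per-channel spectrum
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    bins = [np.argmin(np.abs(freqs - h))               # nearest FFT bin
            for f in stim_freqs for h in (f, 2 * f)]
    return power[:, bins].ravel()                      # flat feature vector

# Hypothetical training on labeled trials X (list of (channels, samples) arrays),
# with y giving the attended flicker frequency per trial:
# feats = np.array([ssvep_features(x, 128, [8.0, 10.0, 12.0]) for x in X])
# clf = SVC(kernel="linear").fit(feats, y)   # linear SVM kernel, as in the paper
```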


2020 ◽  
Vol 69 (4) ◽  
pp. 1530-1539 ◽  
Author(s):  
Leopoldo Angrisani ◽  
Pasquale Arpaia ◽  
Antonio Esposito ◽  
Nicola Moccaldi
