Smart Glasses for Visually Evoked Potential Applications: Characterisation of the Optical Output for Different Display Technologies

2021 · Vol 10 (1) · pp. 33
Author(s): Alessandro Cultrera, Pasquale Arpaia, Luca Callegaro, Antonio Esposito, Massimo Ortolano

Off-the-shelf consumer-grade smart glasses are increasingly used in extended reality and brain–computer interface applications based on the detection of visually evoked potentials from the user's brain. The displays of these devices can be based on different technologies, which may affect the nature of the visual stimulus received by the user. This aspect has a substantial impact on applications based on wearable sensors and devices. We measured the optical output of three models of smart glasses with different display technologies using a photo-transducer, in order to gain insight into their suitability for brain–computer interface applications. The results suggest that the choice of a particular model of smart glasses may strongly depend on the specific application requirements.

2018 · Vol 63 (4) · pp. 377-382
Author(s): Katharina Olze, Christof Jan Wehrmann, Luyang Mu, Meinhard Schilling

Abstract In brain–computer interface (BCI) applications, the use of steady-state visually evoked potentials (SSVEPs) is common, which requires visual stimulation at a constant repetition frequency. When a computer monitor is used, however, the set of usable frequencies is restricted by the refresh rate of the screen: frequencies that are not an integer divisor of the refresh rate cannot be displayed correctly. Furthermore, the programming language in which the stimulation software is written and the operating system influence the frequencies that are actually generated and presented. The aim of this paper is to identify the main challenges in generating SSVEP stimulation on a computer screen, with and without DirectX on Windows-based PC systems, and to provide solutions for these issues.
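The refresh-rate constraint described above can be enumerated directly: each stimulus period must span a whole number of frames, so the exactly displayable frequencies are the refresh rate divided by an integer. A minimal Python sketch of this enumeration (illustrative only, not from the paper; the function name is an assumption):

```python
def displayable_frequencies(refresh_rate_hz, f_min=1.0, f_max=None):
    """Return stimulation frequencies f = refresh_rate / n for integer n,
    i.e. frequencies whose period is a whole number of frames."""
    if f_max is None:
        f_max = refresh_rate_hz / 2  # a cycle needs at least 2 frames
    freqs = []
    n = 2
    while refresh_rate_hz / n >= f_min:
        f = refresh_rate_hz / n
        if f <= f_max:
            freqs.append(f)
        n += 1
    return freqs
```

For a 60 Hz monitor this yields 30, 20, 15, 12, 10 Hz and so on; a target such as 25 Hz would need 2.4 frames per cycle and cannot be rendered exactly, which is the restriction the abstract refers to.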


2021 · pp. 1-13
Author(s): Hamidreza Maymandi, Jorge Luis Perez Benitez, F. Gallegos-Funes, J. A. Perez Benitez

2020 · Vol 16 (2)
Author(s): Stanisław Karkosz, Marcin Jukiewicz

Abstract
Objectives: Optimization of a brain–computer interface by detecting the minimal number of morphological signal features that maximize accuracy.
Methods: A signal-processing system with a morphological feature extractor was designed; a genetic algorithm was then used to select the characteristics that maximize the accuracy of frequency recognition in an offline brain–computer interface (BCI).
Results: The designed system achieves higher accuracy than a previously developed system that uses the same preprocessing methods; however, results varied across subjects.
Conclusions: The previously developed BCI can be enhanced by combining it with morphological feature extraction; however, its performance depends on subject variability.
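The abstract does not detail the genetic algorithm, but feature selection of this kind typically evolves binary masks in which each bit toggles one morphological feature and the fitness is the offline classification accuracy. A hedged sketch of such a loop, with a toy fitness function standing in for the real BCI accuracy (all names and parameters are illustrative, not the authors' implementation):

```python
import random

def evolve(n_features, fitness, pop_size=20, generations=50,
           crossover_p=0.7, mutation_p=0.02, seed=0):
    """Evolve binary feature masks; fitness scores a mask (higher is better)."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features) if rng.random() < crossover_p else 0
            child = a[:cut] + b[cut:]               # single-point crossover
            child = [g ^ (rng.random() < mutation_p) for g in child]  # bit flips
            children.append(child)
        pop = parents + children                    # elitist replacement
    return max(pop, key=fitness)
```

In the paper's setting the fitness call would run the full preprocessing and frequency-recognition pipeline on the selected features; keeping the top half unmutated makes the best mask monotonically non-decreasing across generations.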


Author(s): Yao Li, T. Kesavadas

Abstract One of the expectations for the next generation of industrial robots is that they work collaboratively with humans as robotic co-workers. Robotic co-workers must be able to communicate with human collaborators intelligently and seamlessly. However, prevailing industrial robots are not good at understanding human intentions and decisions. We demonstrate a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) that can directly deliver human cognition to robots through a headset. The BCI is applied to a part-picking robot and sends decisions to the robot while the operator visually inspects the quality of parts. The BCI is verified through a human subject study. In the study, a camera by the side of the conveyor takes a photo of each part and presents it to the operator automatically. When the operator looks at the photo, electroencephalography (EEG) signals are collected through the BCI, and the inspection decision is extracted from the SSVEPs in the EEG. When the operator identifies a defective part, the decision is communicated to the robot, which locates the defective part through a second camera and removes it from the conveyor. The robot can grasp various parts with our grasp planning algorithm (2FRG). We have developed a CNN-CCA model for SSVEP extraction, trained on a dataset collected in our offline experiment. Our approach outperforms the existing CCA, CCA-SVM, and PSD-SVM models. The CNN-CCA is further validated in an online experiment, achieving 93% accuracy in identifying and removing defective parts.
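The CNN-CCA model itself is not described in the abstract, but the standard CCA baseline it is compared against correlates the EEG epoch with sinusoidal reference signals at each candidate stimulation frequency and picks the best match. A plain-CCA sketch in NumPy (illustrative names; a baseline sketch, not the authors' method):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s[0])

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference set at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])

def classify_ssvep(eeg, fs, candidate_freqs):
    """Pick the candidate frequency whose reference set correlates most
    strongly with the EEG epoch (samples x channels)."""
    scores = [max_canonical_corr(eeg, ssvep_reference(f, fs, len(eeg)))
              for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]
```

In the paper's comparison, the CNN front-end replaces or augments this hand-crafted correlation step with learned features before the decision is made.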

