A review-classification of electrooculogram based human computer interfaces

2018 ◽  
Vol 29 (6) ◽  
Author(s):  
S Ramkumar ◽  
K Sathesh Kumar ◽  
T Dhiliphan Rajkumar ◽  
M Ilayaraja ◽  
K Shankar


Author(s):  
Vladimir Ortega-González ◽  
Samir Garbaya ◽  
Frédéric Merienne

In this paper we briefly describe an approach for understanding the psychoacoustic and perceptual effects of what we have identified as the high-level spatial properties of 3D audio. The necessity of this study is first presented in the context of interactive applications such as Virtual Reality and Human Computer Interfaces. From a bibliographic survey of the field, we identify the main potential functions of 3D audio spatial stimulation in interactive applications beyond traditional sound spatialization. In the same sense, a classification of the high-level aspects involved in spatial audio stimulation is proposed and explained. Next, the case study, the experimental methodology, and the framework are described. Finally, we present the expected results as well as their usefulness within the context of a larger project.


Sensors ◽  
2020 ◽  
Vol 20 (9) ◽  
pp. 2443
Author(s):  
Jayro Martínez-Cerveró ◽  
Majid Khalili Ardali ◽  
Andres Jaramillo-Gonzalez ◽  
Shizhe Wu ◽  
Alessandro Tonin ◽  
...  

Electrooculography (EOG) signals have been widely used in Human-Computer Interfaces (HCI). The HCI systems proposed in the literature rely on self-designed or closed environments, which restricts the number of potential users and applications. Here, we present a system for classifying four directions of eye movements from EOG signals. The system is built on open-source ecosystems: the Raspberry Pi single-board computer, the OpenBCI biosignal acquisition device, and an open-source Python library. The design yields a cheap, compact, and portable system that can be replicated or modified. We used the Maximum, Minimum, and Median values of each trial as features for a Support Vector Machine (SVM) classifier. A mean accuracy of 90% was obtained from 7 out of 10 subjects in online classification of Up, Down, Left, and Right movements. This classification system can serve as an input for an HCI, e.g., for assisted communication in paralyzed people.
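The feature-and-classifier pipeline described above can be sketched as follows. This is a minimal illustration of the idea, not the authors' code: the per-trial Maximum, Minimum, and Median features feed a linear SVM. The channel count, trial length, and synthetic saccade shapes are assumptions for demonstration only.

```python
# Sketch of an EOG direction classifier: Max/Min/Median features + SVM.
# Synthetic data stands in for real OpenBCI recordings.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def extract_features(trial):
    """Per-channel Maximum, Minimum, and Median of one EOG trial."""
    return np.concatenate(
        [trial.max(axis=1), trial.min(axis=1), np.median(trial, axis=1)]
    )

def synth_trial(direction):
    """Idealized 2-channel EOG trial (vertical + horizontal) with one saccade."""
    t = np.zeros((2, 250))
    ch, sign = {"up": (0, 1), "down": (0, -1),
                "left": (1, 1), "right": (1, -1)}[direction]
    t[ch, 100:150] = sign * 1.0            # saccade deflection
    return t + 0.05 * rng.standard_normal(t.shape)

labels = ["up", "down", "left", "right"]
X = np.array([extract_features(synth_trial(d)) for d in labels * 20])
y = np.array(labels * 20)

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([extract_features(synth_trial("left"))])[0])  # -> left
```

On real EOG data the deflections are noisier and subject-dependent, which is consistent with the paper's report that accuracy held for 7 of 10 subjects.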


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
S. Mala ◽  
K. Latha

Activity recognition is needed in a variety of applications, for example, surveillance systems, patient monitoring, and human-computer interfaces. Feature selection plays an important role in activity recognition, data mining, and machine learning. To select a subset of features, the evolutionary algorithm Differential Evolution (DE), a very efficient optimizer, is used to find informative features from eye movements recorded with electrooculography (EOG). Many researchers use EOG signals in human-computer interaction with various computational intelligence methods to analyze eye movements. The proposed system analyzes EOG signals using clearness-based features, minimum-redundancy maximum-relevance features, and Differential Evolution based features. This work concentrates on the DE-based feature selection algorithm in order to improve classification accuracy for reliable activity recognition.


2021 ◽  
Vol 18 (3) ◽  
pp. 1-22
Author(s):  
Charlotte M. Reed ◽  
Hong Z. Tan ◽  
Yang Jiao ◽  
Zachary D. Perez ◽  
E. Courtenay Wilson

Stand-alone devices for tactile speech reception serve a need as communication aids for persons with profound sensory impairments, as well as in applications such as human-computer interfaces and remote communication when the normal auditory and visual channels are compromised or overloaded. The current research is concerned with perceptual evaluations of a phoneme-based tactile speech communication device in which a unique tactile code was assigned to each of the 24 consonants and 15 vowels of English. The tactile phonemic display was conveyed through an array of 24 tactors that stimulated the dorsal and ventral surfaces of the forearm. Experiments examined the recognition of individual words as a function of the inter-phoneme interval (Study 1) and two-word phrases as a function of the inter-word interval (Study 2). Following an average training period of 4.3 hours on phoneme and word recognition tasks, mean scores for the recognition of individual words in Study 1 ranged from 87.7% to 74.3% correct as the inter-phoneme interval decreased from 300 to 0 ms. In Study 2, following an average of 2.5 hours of training on the two-word phrase task, both words in the phrase were identified with an accuracy of 75% correct using an inter-word interval of 1 s and an inter-phoneme interval of 150 ms. Effective transmission rates achieved on this task were estimated to be on the order of 30 to 35 words/min.
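The reported transmission rate follows from the timing parameters in Study 2. A back-of-envelope sketch: the 1 s inter-word and 150 ms inter-phoneme intervals come from the text, but the per-phoneme code duration (~240 ms) and the 4-phoneme average word length are illustrative assumptions, not values stated in the abstract.

```python
# Hypothetical timing arithmetic for the two-word phrase task (Study 2).
phoneme_dur = 0.240      # s per tactile phoneme code -- ASSUMED value
inter_phoneme = 0.150    # s, from Study 2
inter_word = 1.0         # s, from Study 2
phonemes_per_word = 4    # ASSUMED average word length

word_time = phonemes_per_word * phoneme_dur + (phonemes_per_word - 1) * inter_phoneme
phrase_time = 2 * word_time + inter_word          # two-word phrase
rate_wpm = 2 / (phrase_time / 60.0)               # words per minute
print(round(rate_wpm, 1))   # ~31 words/min, within the reported 30-35 range
```

Shorter phoneme codes or smaller inter-word gaps would push the rate higher, which is presumably why the studies varied exactly these intervals.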


2021 ◽  
Vol 11 (11) ◽  
pp. 4922
Author(s):  
Tengfei Ma ◽  
Wentian Chen ◽  
Xin Li ◽  
Yuting Xia ◽  
Xinhua Zhu ◽  
...  

To explore whether the brain exhibits pattern differences in the rock–paper–scissors (RPS) imagery task, this paper attempts to classify this task using fNIRS and deep learning. In this study, we designed an RPS task with a total duration of 25 min and 40 s, and recruited 22 volunteers for the experiment. We used an fNIRS acquisition device (FOIRE-3000) to record the cerebral neural activities of these participants during the RPS task. The time series classification (TSC) algorithm was introduced into time-domain fNIRS signal classification. Experiments show that CNN-based TSC methods can achieve 97% accuracy in RPS classification. The CNN-based TSC method is therefore suitable for classifying fNIRS signals in RPS motor imagery tasks, and may open new application directions for the development of brain–computer interfaces (BCI).
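The data flow of a CNN-based TSC model on multichannel time series can be sketched in a few lines: 1-D convolution along the time axis, a nonlinearity, global average pooling, and a linear readout to class logits. The sketch below uses untrained random weights and a synthetic signal; the channel count, filter sizes, and three-class output are assumptions, and this is not the authors' FOIRE-3000 pipeline.

```python
# Forward pass of a minimal 1-D CNN for time series classification.
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid 1-D cross-correlation: x is (channels, time),
    kernels is (filters, channels, width)."""
    f, c, w = kernels.shape
    out = np.zeros((f, x.shape[1] - w + 1))
    for i in range(f):
        for j in range(c):
            # reverse the kernel so np.convolve computes cross-correlation
            out[i] += np.convolve(x[j], kernels[i, j][::-1], mode="valid")
    return out

def tsc_forward(x, kernels, readout):
    h = np.maximum(conv1d(x, kernels), 0.0)   # ReLU
    pooled = h.mean(axis=1)                   # global average pooling over time
    return pooled @ readout                   # class logits

# Synthetic "fNIRS" trial: 16 channels x 300 time points; 3 RPS classes.
x = rng.standard_normal((16, 300))
kernels = 0.1 * rng.standard_normal((8, 16, 7))   # 8 filters, width 7
readout = 0.1 * rng.standard_normal((8, 3))
logits = tsc_forward(x, kernels, readout)
print(logits.shape)   # (3,)
```

A trained model would stack several such convolution blocks and learn the weights by gradient descent; the global pooling step is what lets the same network handle trials of different lengths.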


Leonardo ◽  
2009 ◽  
Vol 42 (5) ◽  
pp. 439-442 ◽  
Author(s):  
Eduardo R. Miranda ◽  
John Matthias

Music neurotechnology is a new research area emerging at the crossroads of neurobiology, engineering sciences and music. Examples of ongoing research into this new area include the development of brain-computer interfaces to control music systems and systems for automatic classification of sounds informed by the neurobiology of the human auditory apparatus. The authors introduce neurogranular sampling, a new sound synthesis technique based on spiking neuronal networks (SNN). They have implemented a neurogranular sampler using the SNN model developed by Izhikevich, which reproduces the spiking and bursting behavior of known types of cortical neurons. The neurogranular sampler works by taking short segments (or sound grains) from sound files and triggering them when any of the neurons fire.
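The neurogranular-sampling mechanism described above can be sketched with the standard Izhikevich model: simulate a few neurons and record, for each spike, which sound grain that neuron would trigger. Grains are index placeholders here; the audio side (extracting grains from sound files and mixing them) is omitted, and the regular-spiking parameters and noisy input current are generic textbook choices, not the authors' configuration.

```python
# Izhikevich neurons whose spikes trigger sound-grain events.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # neurons; neuron i triggers grain i
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # Izhikevich regular-spiking constants
v = np.full(n, -65.0)                   # membrane potential (mV)
u = b * v                               # recovery variable
events = []                             # (time_ms, grain_index) trigger list

dt = 1.0                                # 1 ms Euler steps
for t in range(1000):
    I = 10.0 + 2.0 * rng.standard_normal(n)    # noisy input current
    fired = v >= 30.0
    events += [(t, int(i)) for i in np.flatnonzero(fired)]
    v[fired] = c                               # spike reset
    u[fired] = u[fired] + d
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)

print(len(events), "grain triggers in 1 s")
```

Each `(time, grain)` event would, in the full sampler, schedule playback of a short segment of a sound file, so the rhythmic texture of the output directly reflects the spiking and bursting dynamics of the network.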

