Knowledge Discovery and Multimodal Inputs for Driving an Intelligent Wheelchair

Author(s):  
Brígida Mónica Faria ◽  
Luís Paulo Reis ◽  
Nuno Lau

Cerebral palsy is defined as a group of permanent disorders in the development of movement and posture. The motor disorders in cerebral palsy are associated with deficits of perception, cognition, communication, and behaviour, which can affect autonomy and independence. The interface between the user and an intelligent wheelchair can rely on several input devices, such as joysticks, microphones, and brain-computer interfaces (BCI). A BCI enables interaction between users and hardware systems through the recognition of brainwave activity. Current BCI systems have very low accuracy in recognizing facial expressions and thoughts, making it difficult to use these devices for safe and robust command of complex devices such as an intelligent wheelchair. This paper presents an approach to expanding the use of a brain-computer interface for driving an intelligent wheelchair by patients with cerebral palsy. Each user's ability with joystick, head-movement, and voice inputs was tested, and the most suitable option for driving the wheelchair was identified for that user. Experiments were performed with 30 individuals with cerebral palsy classified at levels IV and V of the Gross Motor Function (GMF) measure. The results show that the pre-processing and variable-selection methods are effective, improving the recognition results of a commercial BCI product by 57%. With the developed system, users were also able to complete a circuit in a simulated environment using only facial expressions and thoughts.
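
To make the pipeline idea concrete, here is a minimal sketch of a generic pre-processing and variable-selection stage in front of a classifier, written with scikit-learn on placeholder data. The paper's actual features, selection method, and classifier are not specified in the abstract, so every name and parameter below is an assumption for illustration only.

```python
# Illustrative only: a generic pre-processing + variable-selection pipeline
# in the spirit of the approach described above. Not the authors' code.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # hypothetical BCI feature vectors (e.g. per-channel band power)
y = rng.integers(0, 2, size=200)  # hypothetical command labels; real recordings would go here

baseline = make_pipeline(SVC())
improved = make_pipeline(StandardScaler(),
                         SelectKBest(f_classif, k=16),  # keep the 16 most discriminative variables
                         SVC())

# With random placeholder data both scores hover near chance; the point is
# only the structure: scale, select variables, then classify.
print("baseline accuracy:", cross_val_score(baseline, X, y, cv=5).mean())
print("with pre-processing + selection:", cross_val_score(improved, X, y, cv=5).mean())
```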

Author(s):  
I. Scott MacKenzie

One enduring trait of computing systems is the presence of the human operator. At the human-computer interface, the nature of computing has witnessed dramatic transformations—from feeding punched cards into a reader to manipulating 3D virtual objects with an input glove. The technology at our fingertips today transcends by orders of magnitude that of the behemoth calculators of the 1940s. Yet technology must co-exist with the human interface of the day. Not surprisingly, themes of keeping pace with technological advances in the human-computer interface, and, hopefully, getting ahead of them, underlie many chapters in this book. The present chapter is no exception. Input devices and interaction techniques are the human operator’s baton. They set, constrain, and elicit a spectrum of actions and responses, and in large part impose a personality on the entire human-machine system. In this chapter, we will present and explore the major issues in “input,” focusing on devices, their properties and parameters, and the possibilities for exploiting devices in advanced human-computer interfaces. To place input devices in perspective, we illustrate a classical human-factors interpretation of the human-machine interface (e.g., Chapanis, 1965, p. 20). Figure 11-1 simplifies the human and machine to three components each. The internal states of each interact in a closed-loop system through controls and displays (the machine interface) and motor-sensory behaviour (the human interface). The terms “input” and “output” are, by convention, with respect to the machine; so input devices are inputs to the machine controlled or manipulated by human “outputs.” Traditionally, human outputs are our limbs—the hands, arms, legs, feet, or head—but speech and eye motions can also act as human output. Some other human output channels are breath and electrical body signals (important for disabled users). Interaction takes place at the interface (dashed line in Figure 11-1) through an output channel—displays stimulating human senses—and the input channel. In the present chapter, we are primarily interested in controls, or input devices; but, by necessity, the other components in Figure 11-1 will to some extent participate in our discussion.
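
As a rough illustration of the closed-loop model of Figure 11-1 (our sketch, not the chapter's), the toy program below wires a human motor action through a control (the input device) into machine state, and closes the loop through a display:

```python
# A toy rendering of the closed-loop human-machine system: human motor output
# drives a machine control (input device), the machine updates its internal
# state, and a display feeds the result back to the human's senses.

def input_device(human_output: str) -> str:
    """Control: translates a human motor action into a machine input event."""
    mapping = {"tilt-right": "MOVE_RIGHT", "tilt-left": "MOVE_LEFT"}
    return mapping.get(human_output, "NOOP")

def machine(state: int, event: str) -> int:
    """Machine: internal state changes in response to the input event."""
    return state + {"MOVE_RIGHT": 1, "MOVE_LEFT": -1}.get(event, 0)

def display(state: int) -> str:
    """Display: machine output that stimulates the human senses."""
    return f"cursor at x={state}"

state = 0
for action in ["tilt-right", "tilt-right", "tilt-left"]:  # human outputs
    state = machine(state, input_device(action))
    print(display(state))  # the feedback that closes the loop
```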


Author(s):  
Wakana Ishihara ◽  
Karen Moxon ◽  
Sheryl Ehrman ◽  
Mark Yarborough ◽  
Tina L. Panontin ◽  
...  

This systematic review addresses the plausibility of using novel feedback modalities for brain-computer interfaces (BCI) and attempts to identify the best feedback modality on the basis of effectiveness or learning rate. Across the included studies, 100% tested visual feedback, 31.6% tested auditory feedback, 57.9% tested tactile feedback, and 21.1% tested proprioceptive feedback. Visual feedback was included in every study design because it was intrinsic to the response of the task (e.g., seeing a cursor move). However, when used alone, it was not very effective at improving accuracy or learning. Proprioceptive feedback was the most successful at increasing the effectiveness of motor-imagery BCI tasks involving neuroprosthetics. Auditory and tactile feedback produced mixed results. The limitations of the current review and recommendations for further study are discussed.
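
A small back-of-envelope check: the four reported percentages are mutually consistent with a pool of 19 studies (19, 6, 11, and 4 studies respectively), a figure implied by, though not stated in, the abstract. The snippet below reproduces that arithmetic:

```python
# Back-of-envelope check (not from the paper): the reported percentages are
# consistent with a pool of 19 studies.
n_studies = 19  # assumption inferred from the percentages, not stated above
counts = {"visual": 19, "auditory": 6, "tactile": 11, "proprioceptive": 4}
for modality, n in counts.items():
    print(f"{modality}: {n}/{n_studies} = {100 * n / n_studies:.1f}%")
# visual: 19/19 = 100.0%, auditory: 6/19 = 31.6%,
# tactile: 11/19 = 57.9%, proprioceptive: 4/19 = 21.1%
```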


2021 ◽  
pp. 003329412110184
Author(s):  
Paola Surcinelli ◽  
Federica Andrei ◽  
Ornella Montebarocci ◽  
Silvana Grandi

Aim of the research: The literature on emotion recognition from facial expressions shows significant differences in recognition ability depending on the stimulus presented. Indeed, affective information is not distributed uniformly across the face, and recent studies have shown the importance of the mouth and eye regions for correct recognition. However, previous studies mainly used facial expressions presented frontally, and those that used profile views relied on between-subjects designs or children's faces as stimuli. The present research investigates differences in emotion recognition between faces presented in frontal and in profile views using a within-subjects experimental design.
Method: The sample comprised 132 Italian university students (88 female; mean age = 24.27 years, SD = 5.89). Face stimuli displayed both frontally and in profile were selected from the KDEF set. Two emotion-specific recognition accuracy scores, one for the frontal and one for the profile view, were computed from the average of correct responses for each emotional expression. In addition, viewing times and response times (RT) were recorded.
Results: Frontally presented facial expressions of fear, anger, and sadness were recognized significantly better than facial expressions of the same emotions in profile, while no differences were found in the recognition of the other emotions. Longer viewing times were also found when faces expressing fear and anger were presented in profile. In the present study, an impairment in recognition accuracy was observed only for those emotions that rely mostly on the eye regions.
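
As a sketch of the scoring described in the Method (our reconstruction, with hypothetical column names and toy data), per-emotion accuracy for frontal versus profile presentations can be computed as the mean of correct responses per cell:

```python
# Illustrative reconstruction of the accuracy scoring; column names and the
# tiny trial table are placeholders, not the study's data.
import pandas as pd

trials = pd.DataFrame({
    "participant": [1, 1, 1, 1, 2, 2, 2, 2],
    "emotion":     ["fear", "fear", "anger", "anger"] * 2,
    "view":        ["frontal", "profile"] * 4,
    "correct":     [1, 0, 1, 1, 1, 0, 0, 1],
})

# One frontal and one profile accuracy score per emotional expression,
# as the mean of correct responses in each (emotion, view) cell.
accuracy = (trials
            .groupby(["emotion", "view"])["correct"]
            .mean()
            .unstack("view"))
print(accuracy)
```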


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Dheeraj Rathee ◽  
Haider Raza ◽  
Sujit Roy ◽  
Girijesh Prasad

Recent advancements in magnetoencephalography (MEG)-based brain-computer interfaces (BCIs) have shown great potential. However, the performance of current MEG-BCI systems is still inadequate, and one of the main reasons for this is the unavailability of open-source MEG-BCI datasets. MEG systems are expensive, and hence MEG datasets are not readily available for researchers to develop effective and efficient BCI-related signal-processing algorithms. In this work, we release a 306-channel MEG-BCI dataset recorded at a 1 kHz sampling frequency during four mental imagery tasks (i.e., hand imagery, feet imagery, subtraction imagery, and word-generation imagery). The dataset contains two sessions of MEG recordings, performed on separate days, from 17 healthy participants using a typical BCI imagery paradigm. To the best of our knowledge, this is currently the only publicly available MEG imagery BCI dataset. It can be used by the scientific community to develop novel pattern-recognition and machine-learning methods for detecting brain activity related to motor and cognitive imagery tasks using MEG signals.
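
For readers who want to work with such recordings, the following is a minimal loading sketch with MNE-Python. The file name, file format (FIF), and event codes are placeholders and assumptions, not the dataset's documented values:

```python
# Minimal MEG loading sketch with MNE-Python. Assumes FIF files with a
# stimulus channel; file name and event codes below are hypothetical.
import mne

raw = mne.io.read_raw_fif("sub-01_ses-1_task-imagery_meg.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)           # band-pass before epoching

events = mne.find_events(raw)                 # assumes a stim channel is present
event_id = {"hand": 1, "feet": 2,             # placeholder codes for the four
            "subtraction": 3, "words": 4}     # imagery tasks described above
epochs = mne.Epochs(raw, events, event_id,
                    tmin=-0.5, tmax=3.0, baseline=(None, 0))
print(epochs)                                 # one epoch per imagery trial
```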


2020 ◽  
Vol 16 (2) ◽  
Author(s):  
Stanisław Karkosz ◽  
Marcin Jukiewicz

Objectives: Optimization of a brain-computer interface by detecting the minimal number of morphological signal features that maximize accuracy.
Methods: A signal-processing system with a morphological-feature extractor was designed; a genetic algorithm was then used to select the characteristics that maximize the accuracy of frequency recognition in an offline brain-computer interface (BCI).
Results: The designed system achieves higher accuracy than a previously developed system that uses the same preprocessing methods, although results varied across subjects.
Conclusions: It is possible to enhance the previously developed BCI by combining it with morphological-feature extraction; however, its performance depends on subject variability.
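
A toy version of the genetic-algorithm feature selection described above (a simplified selection-and-mutation scheme on placeholder data, not the authors' implementation) could look like this:

```python
# Toy genetic algorithm: evolve binary masks over morphological features so
# that as few features as possible still maximise classification accuracy.
# Simplified to selection + mutation only; crossover is omitted.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 20))    # hypothetical morphological features
y = rng.integers(0, 2, size=120)  # hypothetical frequency labels

def fitness(mask):
    if not mask.any():
        return 0.0
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, mask], y, cv=3).mean()
    return acc - 0.01 * mask.sum()  # small penalty favours fewer features

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for _ in range(15):                                  # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 fittest masks
    children = parents[rng.integers(0, 10, size=10)].copy()
    flips = rng.random(children.shape) < 0.05        # mutation: flip ~5% of bits
    children ^= flips
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```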


2014 ◽  
Vol 10 (3) ◽  
pp. 216-220 ◽  
Author(s):  
Michael T. McCann ◽  
David E. Thompson ◽  
Zeeshan H. Syed ◽  
Jane E. Huggins

2013 ◽  
Vol 475-476 ◽  
pp. 1230-1234
Author(s):  
Guo Qing Huang ◽  
Tong Hua Yang ◽  
Sheng Xu

Virtual reality (VR) is a computer-simulated environment that can simulate physical presence in places in the real world or in imagined worlds. It is a new, comprehensive information technology that enables users to "access" the computer-simulated environment through standard input devices and to interact directly with the simulated environment. Through a case study grounded in virtual reality theory, this paper analyses the application types and methods of virtual reality technology, as well as the problems that arise during its application and their solutions.

