Vigilance Estimating in SSVEP-Based BCI Using Multimodal Signals

Author(s):  
Kangning Wang ◽  
Shuang Qiu ◽  
Wei Wei ◽  
Chuncheng Zhang ◽  
Huiguang He ◽  
...  
2021 ◽  
Vol 2 (02) ◽  
pp. 52-58
Author(s):  
Sharmeen M. Saleem Abdullah ◽  
Siddeeq Y. Ameen ◽  
Mohammed Sadeeq ◽  
Subhi Zeebaree

Recent research in human-computer interaction seeks to take the user's emotional state into account in order to provide a seamless interface, which would allow such systems to be applied in widespread fields, including education and medicine. Human emotion can be detected through multiple channels, including facial expressions and images, physiological signals, and neuroimaging techniques. This paper reviews deep-learning approaches to emotion recognition from multimodal signals and compares their applications across current studies. Multimodal affective computing systems are examined alongside unimodal solutions, as they offer higher classification accuracy. Accuracy varies with the number of emotions considered, the features extracted, the classification method, and the consistency of the database. The review also surveys current theories and methodologies of emotion detection, with the aim of helping researchers better understand physiological signals, the current state of the science, and its open problems in emotional awareness.
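As a rough, hypothetical illustration of the multimodal fusion idea discussed in this review, the sketch below shows feature-level fusion of two modalities (facial-image features and physiological-signal features) in a small PyTorch classifier. The modality names, feature dimensions, and six-emotion output are assumptions for illustration, not details taken from any reviewed study.

```python
# Minimal sketch of feature-level multimodal fusion for emotion
# classification (illustrative only; all dimensions are assumptions).
import torch
import torch.nn as nn

class MultimodalEmotionClassifier(nn.Module):
    def __init__(self, face_dim=128, physio_dim=32, n_emotions=6):
        super().__init__()
        # One small encoder per modality.
        self.face_encoder = nn.Sequential(nn.Linear(face_dim, 64), nn.ReLU())
        self.physio_encoder = nn.Sequential(nn.Linear(physio_dim, 64), nn.ReLU())
        # Concatenated modality embeddings feed a shared classifier head.
        self.classifier = nn.Sequential(
            nn.Linear(64 + 64, 64), nn.ReLU(), nn.Linear(64, n_emotions)
        )

    def forward(self, face_feats, physio_feats):
        fused = torch.cat(
            [self.face_encoder(face_feats), self.physio_encoder(physio_feats)],
            dim=-1,
        )
        return self.classifier(fused)  # unnormalized emotion scores (logits)

# Usage: a batch of 8 samples with pre-extracted features per modality.
model = MultimodalEmotionClassifier()
logits = model(torch.randn(8, 128), torch.randn(8, 32))
print(logits.shape)  # torch.Size([8, 6])
```

Concatenating per-modality embeddings before a shared head is only one fusion strategy; decision-level fusion, in which per-modality predictions are combined instead, is a common alternative in the multimodal affective computing literature.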


2016 ◽  
Vol 11 (1) ◽  
pp. 83
Author(s):  
Yuri Broze

Theoretical and methodological perspectives are offered. Perceived personal involvement, variety of improvisational tools, and dramaturgy might be attributed to multimodal signals and cues of emotion, implicit learning of motor routines and musical tendency, and deliberate planning on the part of the musician. I give a personal perspective on my experience as an improvising musician and suggest a sketch of how I imagine improvisation often works. Dramaturgical models meant for iteratively constructed artworks such as plays are likely to be deficient for improvisational artworks in general. Finally, the authors' methodological choices are considered. Both exploratory research and model selection are also driven by implicit hypotheses, so care must be taken to minimize false discovery.


Author(s):  
Gaojian Huang ◽  
Clayton Steele ◽  
Xinrui Zhang ◽  
Brandon J. Pitts

The rapid growth of autonomous vehicles is expected to improve roadway safety. However, certain levels of vehicle automation will still require drivers to 'take over' during abnormal situations, which may lead to breakdowns in driver-vehicle interactions. To date, there is no agreement on how to best support drivers in accomplishing a takeover task. Therefore, the goal of this study was to investigate the effectiveness of multimodal alerts as a feasible approach. In particular, we examined the effects of uni-, bi-, and trimodal combinations of visual, auditory, and tactile cues on response times to takeover alerts. Sixteen participants were asked to detect 7 multimodal signals (i.e., visual, auditory, tactile, visual-auditory, visual-tactile, auditory-tactile, and visual-auditory-tactile) while driving under two conditions: with SAE Level 3 automation only, or with SAE Level 3 automation while also performing a road sign detection task. Performance on the signal and road sign detection tasks, pupil size, and perceived workload were measured. Findings indicate that trimodal combinations result in the shortest response times. Also, response times were longer and perceived workload was higher when participants were engaged in the secondary task. Findings may contribute to the development of theory regarding the design of takeover request alert systems within (semi-)autonomous vehicles.
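As a purely hypothetical sketch of how per-condition response times might be summarized (this is not the authors' analysis, and the trial values below are made up for illustration), the following Python snippet groups logged takeover response times by alert modality and reports the mean per condition:

```python
# Hypothetical per-trial log: (alert modality, response time in ms).
# Values are illustrative only, not data from the study.
from collections import defaultdict
from statistics import mean

trials = [
    ("visual", 1420), ("auditory", 1180), ("tactile", 1210),
    ("visual-auditory", 980), ("visual-tactile", 1010),
    ("auditory-tactile", 940), ("visual-auditory-tactile", 870),
]

by_condition = defaultdict(list)
for modality, rt_ms in trials:
    by_condition[modality].append(rt_ms)

# Mean response time per alert condition, fastest first.
for modality, rts in sorted(by_condition.items(), key=lambda kv: mean(kv[1])):
    print(f"{modality:<24} {mean(rts):7.1f} ms")
```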


2019 ◽  
Vol 154 ◽  
pp. 57-63 ◽  
Author(s):  
Zhen Liu ◽  
Qingbo He ◽  
Shiqian Chen ◽  
Xingjian Dong ◽  
Zhike Peng ◽  
...  

2010 ◽  
Vol 56 (3) ◽  
pp. 313-326 ◽  
Author(s):  
Sarah R. Partan ◽  
Andrew G. Fulmer ◽  
Maya A. M. Gounard ◽  
Jake E. Redmond

Urbanization of animal habitats has the potential to affect the natural communication systems of any species able to survive in the changed environment. Urban animals such as squirrels use multiple signal channels to communicate, but it is unknown how urbanization has affected these behaviors. Multimodal communication, involving more than one sensory modality, can be studied with biomimetic mechanical animal models designed to simulate the multimodal signals and be presented to animal subjects in the field. In this way, responses to the various signal components can be compared and contrasted to determine whether the multimodal signal is made up of redundant or nonredundant components. In this study, we presented wild gray squirrels in relatively urban and relatively rural habitats in Western Massachusetts with a biomimetic squirrel model that produced tail flags and alarm barks in a variety of combinations. We found that the squirrels responded to each unimodal component on its own, the bark and the tail flag, but they responded most to the complete multimodal signal containing both the acoustic and the moving visual components, providing evidence that in this context the signal components are redundant and that their combination elicits multimodal enhancement. We expanded on the results of Partan et al. (2009) by providing data on signaling behavior in the presence and absence of conspecifics, which suggest that alarm signaling is more likely when conspecifics are present. We also found that the squirrels were more active in the urban habitats and responded more to tail flagging there than in the rural habitats, suggesting the interesting possibility of a multimodal shift from reliance on acoustic to visual signals in noisier, more crowded urban habitats.


Behaviour ◽  
2013 ◽  
Vol 150 (12) ◽  
pp. 1467-1489 ◽  
Author(s):  
Arielle Duhaime-Ross ◽  
Geneviève Martel ◽  
Frédéric Laberge

Many animals use and react to multimodal signals, that is, signals that occur in more than one sensory modality. This study focused on the respective roles of vision, chemoreception, and their possible interaction in determining agonistic responses of the red-backed salamander, Plethodon cinereus. The use of a computer display allowed separate or combined presentation of visual and chemical cues. A cue isolation experiment using adult male and juvenile salamanders showed that both visual and chemical cues from unfamiliar male conspecifics could increase aggressive displays. Submissive displays were increased only in juveniles, and specifically by the visual cue. The rate of chemoinvestigation of the substrate was increased only by chemical cues in adults, whereas both chemical and visual cues increased this behaviour in juveniles. Chemoinvestigation thus appears more dependent on sensory input in juvenile salamanders. A follow-up experiment comparing responses to visual cues of different animals (conspecific salamander, heterospecific salamander and earthworm) or an inanimate object (wood stick) showed that exploratory behaviour was higher in the presence of the inanimate object stimulus. The heterospecific salamander stimulus produced strong submissive and escape responses, while the conspecific salamander stimulus promoted aggressive displays. Finally, the earthworm stimulus increased both aggressive and submissive behaviours at intermediate levels when compared to salamander cues. These specific combinations of agonistic and exploratory responses to each stimulus suggest that the salamanders could discriminate the cues visually. This study sheds some light on how information from different sensory modalities guides social behaviour at different life stages in a salamander.


PLoS ONE ◽  
2013 ◽  
Vol 8 (1) ◽  
pp. e55367 ◽  
Author(s):  
Doris Preininger ◽  
Markus Boeckle ◽  
Marc Sztatecsny ◽  
Walter Hödl
