audiovisual stimuli
Recently Published Documents

TOTAL DOCUMENTS: 173 (five years: 66)
H-INDEX: 18 (five years: 3)

Diagnostics ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. 125
Author(s):  
Alexandr Y. Petukhov ◽  
Sofia A. Polevaya ◽  
Anna V. Polevaya

In this paper, we study ways and methods to diagnose the emotional state of individuals using external audiovisual stimuli and heart telemetry tools. We apply a mathematical model of neurocognitive brain activity, developed specifically for this study, to interpret the experimental scheme and its results. The experimental technique is based on monitoring and analyzing the dynamics of heart rate variability (HRV) while taking into account the particular context and events occurring around the subject of the study. In addition, we provide a brief description of the theory of information images/representations used for the paradigm and interpretation of the experiment. For this study, we modeled the human mind as a one-dimensional potential well with finite walls of different heights and an internal potential barrier representing the border between consciousness and the subconscious, and we lay out the foundations of the corresponding mathematical apparatus. The experiment allowed us to identify characteristic markers of the influence of external stimuli, which form a foundation for diagnosing the emotional state of an individual.
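
The HRV monitoring at the core of this technique can be illustrated with two standard time-domain markers. A minimal sketch, assuming RR intervals in milliseconds; the function and the example values are illustrative, not taken from the paper:

```python
import numpy as np

def hrv_markers(rr_ms):
    """Compute two standard HRV markers from a series of RR intervals (ms).

    SDNN  -- standard deviation of RR intervals (overall variability).
    RMSSD -- root mean square of successive differences (beat-to-beat variability).
    """
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd

# Hypothetical example: compare a baseline window with a stimulus window.
baseline = [812, 790, 805, 830, 798, 815, 802]
stimulus = [760, 742, 781, 730, 755, 748, 770]
print(hrv_markers(baseline))
print(hrv_markers(stimulus))
```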


2021 ◽  
Vol 15 ◽  
Author(s):  
Omer Ashmaig ◽  
Liberty S. Hamilton ◽  
Pradeep Modur ◽  
Robert J. Buchanan ◽  
Alison R. Preston ◽  
...  

Intracranial recordings in epilepsy patients are increasingly used to gain insight into the electrophysiological mechanisms of human cognition. There are currently several practical limitations to conducting research with these patients, including patient and researcher availability and the cognitive abilities of patients, which limit the amount of task-related data that can be collected. Prior studies have synchronized clinical audio, video, and neural recordings to study naturalistic behaviors, but those recordings are centered on the patient for characterizing seizure semiology and thus do not capture or synchronize the audiovisual stimuli the patient experiences. Here, we describe a platform for cognitive monitoring of neurosurgical patients during their hospitalization that benefits both patients and researchers. We provide the full specifications for this system and describe example use cases in perception, memory, and sleep research. We present results obtained from a patient passively watching TV as proof of principle for the naturalistic study of cognition. Our system opens up new avenues for collecting more data per patient using real-world behaviors, affording new possibilities for longitudinal studies of the electrophysiological basis of human cognition under naturalistic conditions.
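
One building block of such a platform is aligning the stimulus track with the clinical recording. A minimal sketch of cross-correlation alignment, assuming both signals share a sampling rate; this illustrates the general approach, not the authors' published pipeline:

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_offset(clinical_audio, stimulus_audio, fs):
    """Estimate where the stimulus track starts within the clinical recording.

    Cross-correlates the two signals and returns the best-matching lag in
    seconds (positive = stimulus begins after recording onset).
    """
    c = correlate(clinical_audio, stimulus_audio, mode="full")
    lags = correlation_lags(len(clinical_audio), len(stimulus_audio), mode="full")
    return lags[np.argmax(np.abs(c))] / fs

# Synthetic check: embed a 1-s stimulus 2 s into a noisy 5-s recording.
fs = 1000
rng = np.random.default_rng(0)
stim = rng.standard_normal(fs)
rec = rng.standard_normal(5 * fs) * 0.1
rec[2 * fs:3 * fs] += stim
print(estimate_offset(rec, stim, fs))  # ~2.0
```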


2021 ◽  
Vol 17 (11) ◽  
pp. e1008877
Author(s):  
Fangfang Hong ◽  
Stephanie Badde ◽  
Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants' behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is determined directly by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues and on an inference about how likely it is that the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration. We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
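
The reliability-based model (a) makes a prediction that is easy to sketch: each modality shifts toward the other in proportion to the other cue's relative reliability (inverse variance), so the less reliable cue is recalibrated more. A simplified illustration under assumed parameters, not the authors' fitted model:

```python
def reliability_based_shifts(discrepancy, sigma_a, sigma_v, rate=0.5):
    """Reliability-based recalibration sketch.

    Each modality is shifted toward the other in proportion to the *other*
    cue's relative reliability (inverse variance), so the less reliable
    cue is recalibrated more.

    discrepancy : auditory-minus-visual spatial offset (deg)
    sigma_a, sigma_v : assumed noise SDs of the auditory and visual cues
    rate : assumed overall learning rate
    """
    r_a, r_v = 1.0 / sigma_a**2, 1.0 / sigma_v**2
    shift_a = -rate * discrepancy * r_v / (r_a + r_v)  # audition pulled toward vision
    shift_v = rate * discrepancy * r_a / (r_a + r_v)   # vision pulled toward audition
    return shift_a, shift_v

# As visual reliability drops (larger sigma_v), the predicted auditory shift
# shrinks monotonically -- the pattern many participants did not show.
for sigma_v in (1.0, 3.0, 9.0):
    print(sigma_v, reliability_based_shifts(10.0, sigma_a=5.0, sigma_v=sigma_v))
```

Sweeping sigma_v upward makes the predicted auditory recalibration decrease monotonically; the abstract's key observation is that several participants instead showed peaked or increasing patterns, which only the causal-inference model captures.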


2021 ◽  
pp. 1-19
Author(s):  
Alexandra N. Scurry ◽  
Daniela M. Lemus ◽  
Fang Jiang

Reliable duration perception is an integral aspect of daily life that impacts everyday perception, motor coordination, and the subjective passage of time. Scalar Expectancy Theory (SET) is a common model in which an internal pacemaker, gated by an external stimulus-driven switch, accumulates pulses during sensory events; the accumulated pulses are compared with a reference memory duration for subsequent duration estimation. Second-order mechanisms, such as multisensory integration (MSI) and attention, can influence this model and affect duration perception. For instance, diverting attention away from temporal features could delay the switch closure or temporarily open the switch, altering pulse accumulation and distorting duration perception. In crossmodal duration perception, auditory signals of unequal duration can induce perceptual compression and expansion of the durations of visual stimuli, presumably via auditory influence on the visual clock. The current project investigated the roles of temporal (stimulus alignment) and nontemporal (stimulus complexity) features in crossmodal, specifically auditory-over-visual, duration perception. While temporal alignment had a larger impact on the strength of crossmodal duration percepts than stimulus complexity did, both features showcased auditory dominance in the processing of visual duration.
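
The pacemaker-switch-accumulator mechanism described by SET can be captured in a toy simulation. Here the attentional effect is modeled only as a delayed switch closure, an assumption for illustration rather than the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def perceived_pulses(duration_s, rate_hz=50.0, switch_delay_s=0.0):
    """Pacemaker-accumulator toy model (Scalar Expectancy Theory).

    Pulses are emitted as a Poisson process at `rate_hz`; the switch closes
    `switch_delay_s` after stimulus onset, so late closure loses pulses and
    shortens the perceived duration.
    """
    open_time = max(duration_s - switch_delay_s, 0.0)
    return rng.poisson(rate_hz * open_time)

# Attended vs. distracted: the same 1-s stimulus accumulates fewer pulses
# when switch closure is delayed by 100 ms.
print(perceived_pulses(1.0))                      # attended
print(perceived_pulses(1.0, switch_delay_s=0.1))  # distracted
```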


2021 ◽  
Vol 2021 ◽  
pp. 1-17
Author(s):  
Noor Kamal Al-Qazzaz ◽  
Mohannad K. Sabir ◽  
Sawal Hamid Bin Mohd Ali ◽  
Siti Anom Ahmad ◽  
Karl Grammer

Investigating gender differences in emotional responses is essential for understanding various human behaviors in daily life. Ten students from the University of Vienna were recruited, and an electroencephalogram (EEG) dataset was recorded while they watched four short emotional video clips (anger, happiness, sadness, and neutral) presented as audiovisual stimuli. In this study, conventional filtering and wavelet (WT) denoising were applied as a preprocessing stage, and Hurst exponent (Hur) and amplitude-aware permutation entropy (AAPE) features were extracted from the EEG dataset. k-nearest neighbors (kNN) and support vector machine (SVM) classifiers were used for automatic gender recognition from emotion-related EEGs. The novelty of this paper is twofold: first, Hur is investigated as a complexity feature and AAPE as an irregularity parameter for emotion-related EEGs using two-way analysis of variance (ANOVA); these features are then integrated into a new hybrid feature-fusion method (CompEn) to develop the novel WT_CompEn framework, the core of an automated gender recognition model intended to be sensitive to gender roles in the brain-emotion relationship in females and males. The results illustrate the effectiveness of the Hur and AAPE features as indices for investigating gender differences in the anger, sadness, happiness, and neutral emotional states. Moreover, the proposed WT_CompEn framework achieved an SVM classification accuracy of 100%, indicating that it may offer a reliable way to enhance gender recognition across different emotional states. The WT_CompEn framework thus advances automatic gender recognition from emotion-related EEG signals, allowing more comprehensive insights into gender differences and the effects of an intervention on the brain.
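
Of the two features, AAPE is the less familiar. A sketch following the amplitude-aware permutation entropy of Azami and Escudero (2016), on which the paper's feature appears to be based; the wavelet-denoising stage is omitted:

```python
import numpy as np
from math import factorial

def aape(x, m=3, tau=1, A=0.5):
    """Amplitude-aware permutation entropy (after Azami & Escudero, 2016).

    Each embedded pattern's count is weighted by a mix (coefficient A) of its
    mean absolute amplitude and mean absolute successive difference, so large
    deflections contribute more than tiny fluctuations of the same shape.
    """
    x = np.asarray(x, dtype=float)
    n = len(x) - (m - 1) * tau
    weights = {}
    for i in range(n):
        seg = x[i:i + m * tau:tau]
        pattern = tuple(np.argsort(seg))
        w = (A / m) * np.abs(seg).sum() \
            + ((1 - A) / (m - 1)) * np.abs(np.diff(seg)).sum()
        weights[pattern] = weights.get(pattern, 0.0) + w
    p = np.array(list(weights.values()))
    p /= p.sum()
    return -np.sum(p * np.log(p)) / np.log(factorial(m))  # normalized to [0, 1]

# Hypothetical one-channel example: a regular signal vs. white noise.
t = np.linspace(0, 10, 1000)
print(aape(np.sin(2 * np.pi * t)))                           # low entropy
print(aape(np.random.default_rng(0).standard_normal(1000)))  # near 1
```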


2021 ◽  
pp. JN-RM-2891-20
Author(s):  
Maansi Desai ◽  
Jade Holder ◽  
Cassandra Villarreal ◽  
Nat Clark ◽  
Brittany Hoang ◽  
...  

2021 ◽  
Vol 15 ◽  
Author(s):  
Isma Zulfiqar ◽  
Michelle Moerel ◽  
Agustin Lage-Castellanos ◽  
Elia Formisano ◽  
Peter De Weerd

Recent studies have highlighted possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84%-correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55% and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely detectable visual stimuli on the response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences occurred only for specific phase differences (at onset) between the modulated audiovisual streams. We discuss our findings in light of a possible role of direct interactions between early visual and auditory areas, along with contributions from higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.
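
The onset phase-difference manipulation can be sketched by generating amplitude-modulated streams whose modulators start at controlled phases. The parameter values below (modulation rate, depth, sampling rate) are illustrative assumptions, not the study's settings:

```python
import numpy as np

def am_noise(dur_s=1.0, fs=44100, mod_hz=4.0, depth=0.3, phase=0.0, seed=0):
    """White noise with sinusoidal amplitude modulation.

    `phase` (radians) sets the modulator's starting phase, so two streams
    can be generated with a controlled onset phase difference.
    """
    t = np.arange(int(dur_s * fs)) / fs
    rng = np.random.default_rng(seed)
    carrier = rng.standard_normal(t.size)
    modulator = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t + phase)
    return carrier * modulator

aud = am_noise(phase=0.0)            # auditory stream
vis = am_noise(phase=np.pi / 2)      # second stream's modulator shifted 90 deg
```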


2021 ◽  
pp. 18-30
Author(s):  
Mathias Clasen

Most horror films contain several jump scares, which are sudden audiovisual stimuli that elicit a startle response. Many people who are nervous about horror films point to the jump scare as a dreaded element. The jump scare usually follows a predictable formula, but jump scares can be complex and artful, and the science of the startle response reveals an ancient defensive system designed by evolution for survival. While it is difficult to protect oneself from the jump scare, the chapter offers science-based advice on how to attenuate its effect—including coping strategies—and suggests some horror movies with low jump-scare frequency.


2021 ◽  
pp. 1-29
Author(s):  
Jie Wu ◽  
Qitian Li ◽  
Qiufang Fu ◽  
Michael Rose ◽  
Liping Jing

Although it has been demonstrated that multisensory information can facilitate object recognition and object memory, it remains unclear whether such a facilitation effect exists in category learning. To address this issue, comparable car images and sounds were first selected via a discrimination task in Experiment 1. Those selected images and sounds were then used in a prototype category-learning task in Experiments 2 and 3, in which participants were trained with auditory, visual, and audiovisual stimuli and tested with trained or untrained stimuli from the same categories, presented alone or accompanied by a congruent or incongruent stimulus in the other modality. In Experiment 2, when low-distortion stimuli (more similar to the prototypes) were trained, accuracy was higher for audiovisual trials than for visual trials, but did not differ significantly between audiovisual and auditory trials. During testing, accuracy was significantly higher for congruent trials than for unisensory or incongruent trials, and the congruency effect was larger for untrained high-distortion stimuli than for trained low-distortion stimuli. In Experiment 3, when high-distortion stimuli (less similar to the prototypes) were trained, accuracy was higher for audiovisual trials than for visual or auditory trials, and the congruency effect was larger for trained high-distortion stimuli than for untrained low-distortion stimuli during testing. These findings demonstrate that a higher degree of stimulus distortion results in a more robust multisensory effect, and that the categorization of both trained and untrained stimuli in one modality can be influenced by an accompanying stimulus in the other modality.
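
The low- versus high-distortion manipulation follows the standard prototype-distortion logic of such tasks. An abstract feature-vector sketch (the study used car images and sounds; the vector representation here is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def make_exemplars(prototype, n, distortion):
    """Generate category exemplars by perturbing a prototype feature vector.

    `distortion` is the SD of the added Gaussian noise: low values yield
    exemplars similar to the prototype, high values less similar ones.
    """
    prototype = np.asarray(prototype, dtype=float)
    return prototype + rng.normal(0.0, distortion, size=(n, prototype.size))

proto = rng.normal(size=8)              # hypothetical 8-D feature prototype
low = make_exemplars(proto, 20, 0.2)    # low-distortion training set
high = make_exemplars(proto, 20, 1.0)   # high-distortion training set
```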

