Audiovisual Perception
Recently Published Documents


TOTAL DOCUMENTS

55
(FIVE YEARS 13)

H-INDEX

10
(FIVE YEARS 2)

2021 ◽ Vol 15 ◽ Author(s): Ute Korn, Marina Krylova, Kilian L. Heck, Florian B. Häußinger, Robert S. Stark, ...

Processing of sensory information is embedded in ongoing neural processes that contribute to brain states. Electroencephalographic microstates are short-lived, semi-stable scalp potential distributions that have been associated with subsystem activity such as auditory, visual, and attention networks. Here we explore changes in electrical brain states in response to an audiovisual perception and memorization task under conditions of auditory distraction. We discovered changes in brain microstates reflecting a weakening of states representing auditory-system activity and a strengthening of states linked to salience networks, supporting the idea that salience networks are active after audiovisual encoding and during memorization to protect memories and to concentrate on the upcoming behavioural response.
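The microstate idea can be illustrated in a few lines: scalp topographies at peaks of the global field power are clustered into a small set of template maps. The sketch below uses synthetic data and plain k-means with a polarity-ignoring assignment; it is a rough illustration only, as real analyses use a dedicated polarity-invariant "modified k-means" on recorded EEG.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG: 32 channels x 1000 samples (stand-in for real data).
eeg = rng.standard_normal((32, 1000))

# Global field power (GFP): spatial standard deviation at each time point.
gfp = eeg.std(axis=0)

# Microstate analysis clusters the scalp topographies at local GFP peaks.
peaks = [t for t in range(1, len(gfp) - 1) if gfp[t - 1] < gfp[t] > gfp[t + 1]]
maps = eeg[:, peaks].T                              # one topography per peak
maps /= np.linalg.norm(maps, axis=1, keepdims=True)

# Simple k-means into 4 microstate classes with polarity-ignoring assignment.
k, n_iter = 4, 20
centers = maps[rng.choice(len(maps), k, replace=False)]
for _ in range(n_iter):
    labels = np.argmax(np.abs(maps @ centers.T), axis=1)
    for j in range(k):
        if np.any(labels == j):
            # Dominant spatial pattern of the cluster: first principal axis.
            u, _, _ = np.linalg.svd(maps[labels == j].T, full_matrices=False)
            centers[j] = u[:, 0]

print(np.bincount(labels, minlength=k))  # samples assigned to each microstate
```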


2021 ◽ Vol 17 (11) ◽ pp. e1008877 ◽ Author(s): Fangfang Hong, Stephanie Badde, Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying visual reliability. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During an audiovisual recalibration phase, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the audiovisual recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its estimate, which in turn depends on the reliability of both cues and on the inference about how likely it is that the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, peaked at medium visual reliability, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration.
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.
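The three candidate update rules can be sketched for a single toy trial. All numbers below (noise levels, learning rate, fixed ratio, common-cause probability) are illustrative assumptions, not values fitted by the paper:

```python
import numpy as np

# Toy parameters for one trial (assumed values, not the paper's fits).
sigma_a, sigma_v = 6.0, 2.0   # auditory / visual localization noise (deg)
x_a, x_v = 0.0, 10.0          # discrepant cue locations (deg)
alpha = 0.1                   # recalibration learning rate
p_common = 0.7                # inferred probability of a common source

ra, rv = 1 / sigma_a**2, 1 / sigma_v**2   # cue reliabilities

# (a) Reliability-based: the less reliable cue is recalibrated more.
shift_a_rel = alpha * (rv / (ra + rv)) * (x_v - x_a)
shift_v_rel = alpha * (ra / (ra + rv)) * (x_a - x_v)

# (b) Fixed-ratio: recalibration split in a fixed proportion (here 90/10).
shift_a_fix = alpha * 0.9 * (x_v - x_a)
shift_v_fix = alpha * 0.1 * (x_a - x_v)

# (c) Causal-inference: each cue moves toward its final estimate, a mixture
# of the fused estimate (common cause) and the cue itself (separate causes).
fused = (ra * x_a + rv * x_v) / (ra + rv)
est_a = p_common * fused + (1 - p_common) * x_a
est_v = p_common * fused + (1 - p_common) * x_v
shift_a_ci = alpha * (est_a - x_a)
shift_v_ci = alpha * (est_v - x_v)

print(shift_a_rel, shift_a_ci, shift_v_ci)
```

With a reliable visual cue, the causal-inference rule shifts audition strongly while leaving vision nearly untouched, matching the qualitative pattern the abstract reports.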


2021 ◽ Author(s): Fangfang Hong, Stephanie Badde, Michael S. Landy

To obtain a coherent perception of the world, our senses need to be in alignment. When we encounter misaligned cues from two sensory modalities, the brain must infer which cue is faulty and recalibrate the corresponding sense. We examined whether and how the brain uses cue reliability to identify the miscalibrated sense by measuring the audiovisual ventriloquism aftereffect for stimuli of varying reliability. Visual spatial reliability was smaller than, comparable to, or greater than that of the auditory stimuli. To adjust for modality-specific biases, visual stimulus locations were chosen based on perceived alignment with auditory stimulus locations for each participant. During audiovisual recalibration, participants were presented with bimodal stimuli with a fixed perceptual spatial discrepancy; they localized one modality, cued after stimulus presentation. Unimodal auditory and visual localization was measured before and after the recalibration phase. We compared participants’ behavior to the predictions of three models of recalibration: (a) Reliability-based: each modality is recalibrated based on its relative reliability—less reliable cues are recalibrated more; (b) Fixed-ratio: the degree of recalibration for each modality is fixed; (c) Causal-inference: recalibration is directly determined by the discrepancy between a cue and its final estimate, which in turn depends on the reliability of both cues and on the inference about how likely it is that the two cues derive from a common source. Vision was hardly recalibrated by audition. Auditory recalibration by vision changed idiosyncratically as visual reliability decreased: the extent of auditory recalibration either decreased monotonically, first increased and then decreased, or increased monotonically. The latter two patterns cannot be explained by either the reliability-based or fixed-ratio models. Only the causal-inference model of recalibration captures the idiosyncratic influences of cue reliability on recalibration.
We conclude that cue reliability, causal inference, and modality-specific biases guide cross-modal recalibration indirectly by determining the perception of audiovisual stimuli.

Author summary: Audiovisual recalibration of spatial perception occurs when we receive audiovisual stimuli with a systematic spatial discrepancy. The brain must determine to what extent each modality should be recalibrated. In this study, we scrutinized the mechanisms the brain employs to do so. To this end, we conducted a classical recalibration task in which participants were adapted to spatially discrepant audiovisual stimuli. The visual component of the bimodal stimulus was either less, equally, or more reliable than the auditory component. We measured the amount of recalibration by computing the difference between participants’ unimodal localization responses before and after the recalibration task. Across participants, the influence of visual reliability on auditory recalibration varied fundamentally. We compared three models of recalibration. Only a causal-inference model of recalibration captured the diverse influences of cue reliability on recalibration found in our study, and this model can also replicate seemingly contradictory results found in previous studies. In this model, recalibration depends on the discrepancy between a cue and its final estimate. Cue reliability, perceptual biases, and the degree to which participants infer that the two cues come from a common source govern audiovisual perception and therefore audiovisual recalibration.


2021 ◽ Vol 11 ◽ Author(s): Shoji Tanaka

Opera is a performing art in which music plays the leading role, and the acting of the singers has a synergistic effect with the music. The mirror neuron system represents the neurophysiological mechanism underlying the coupling of perception and action. Mirror neuron activity is modulated by the appropriateness of actions and the clarity of intentions, as well as by emotional expression and aesthetic values. It is therefore reasonable to assume that an opera performance induces mirror neuron activity in the audience, so that the performer effectively shares an embodied performance with the audience. However, it is uncertain which aspect of opera performance induces mirror neuron activity. It is hypothesized that although auditory stimuli alone could induce mirror neuron activity, audiovisual perception of the stage performance is its primary inducer. To test this hypothesis, this study correlated opera performance with brain activity, as measured by electroencephalography (EEG) in singers while they watched an opera performance with sound or listened to an aria without visual stimulation. We detected mirror neuron activity by observing that EEG power in the alpha frequency band (8–13 Hz) selectively decreased over the frontal-central-parietal area while participants watched an opera performance. In the auditory-only condition, however, alpha-band power did not change relative to the resting condition. This study illustrates that the audiovisual perception of an opera performance engages the mirror neuron system of its audience.
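The alpha-band (8–13 Hz) power measure used here can be sketched with a plain periodogram on a synthetic signal. The sampling rate, epoch length, and single synthetic channel below are assumptions for illustration; a real pipeline would average band power over electrodes and epochs:

```python
import numpy as np

fs = 250                       # sampling rate (Hz); an assumed value
t = np.arange(0, 10, 1 / fs)   # 10 s epoch

# Synthetic single-channel EEG: a 10 Hz (alpha) rhythm plus broadband noise.
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean power spectral density within [lo, hi] Hz (periodogram estimate)."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

alpha = band_power(eeg, fs, 8, 13)   # the 8-13 Hz band used in the study
print(alpha)
```

Comparing this quantity between conditions (watching vs. resting) is what the reported alpha-power decrease amounts to.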


2021 ◽ Vol 11 ◽ Author(s): Marzieh Sorati, Dawn M. Behne

Previous research with speech and non-speech stimuli suggests that in audiovisual perception, visual information starting before the onset of the corresponding sound can provide visual cues and form a prediction about the upcoming auditory signal. This prediction leads to audiovisual (AV) interaction: auditory and visual perception interact, suppressing and speeding up early auditory event-related potentials (ERPs) such as the N1 and P2. To investigate AV interaction, previous research examined N1 and P2 amplitudes and latencies in response to audio-only (AO), video-only (VO), audiovisual, and control (CO) stimuli, and compared AV with auditory perception based on four AV-interaction models (AV vs. AO+VO, AV-VO vs. AO, AV-VO vs. AO-CO, AV vs. AO). The current study addresses how these models express N1 and P2 suppression in music perception. It goes one step further and examines whether previous musical experience, which can lead to higher N1 and P2 amplitudes in auditory perception, influences AV interaction under the different models. Musicians and non-musicians were presented with recordings (AO, AV, VO) of a keyboard /C4/ key being played, as well as with CO stimuli. Results showed that the AV-interaction models differ in how they express N1 and P2 amplitude and latency suppression: the calculation of the (AV-VO vs. AO) and (AV-VO vs. AO-CO) models has consequences for the resulting N1 and P2 difference waves. Furthermore, while musicians showed higher N1 amplitudes than non-musicians in auditory perception, suppression of N1 and P2 amplitudes and latencies was similar for the two groups across the AV models. Collectively, these results suggest that when visual cues from finger and hand movements predict the upcoming sound in AV music perception, suppression of early ERPs is similar for musicians and non-musicians.
Notably, the calculation differences across models do not lead to the same pattern of results for N1 and P2, demonstrating that the four models are not interchangeable and are not directly comparable.
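The four contrasts are simple waveform arithmetic, which a toy example makes concrete. The Gaussian-bump "ERPs" and amplitudes below are invented stand-ins for grand-average waveforms; note that AV − (AO+VO) and (AV−VO) − AO are algebraically the same difference wave, while the AO−CO variant differs by the control correction:

```python
import numpy as np

t = np.linspace(-0.1, 0.4, 256)  # epoch time (s); window is an assumption

# Stand-in grand-average ERPs for one electrode: a negative N1 bump near
# 100 ms and a positive P2 bump near 200 ms (amplitudes are invented).
def erp(n1_amp, p2_amp):
    return (-n1_amp * np.exp(-((t - 0.1) / 0.02) ** 2)
            + p2_amp * np.exp(-((t - 0.2) / 0.03) ** 2))

AO, VO, AV, CO = erp(2.0, 1.5), erp(0.3, 0.2), erp(1.6, 1.1), erp(0.1, 0.1)

# The four AV-interaction contrasts compared in the study:
d1 = AV - (AO + VO)          # AV vs. AO+VO
d2 = (AV - VO) - AO          # AV-VO vs. AO (algebraically equal to d1)
d3 = (AV - VO) - (AO - CO)   # AV-VO vs. AO-CO: differs from d2 by CO
d4 = AV - AO                 # AV vs. AO: ignores the visual-only response

# Mean difference amplitude in an N1 window (~100 ms) under each model.
n1_win = (t > 0.08) & (t < 0.12)
for name, d in [("AV vs AO+VO", d1), ("AV-VO vs AO", d2),
                ("AV-VO vs AO-CO", d3), ("AV vs AO", d4)]:
    print(f"{name}: {d[n1_win].mean():.3f}")
```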


Author(s): Pelin Pekmezci, Hülya Öztop

Elderly education helps to prevent cognitive regression in the areas of memory, attention, audiovisual perception, language-usage ability, and behavior-management skills. Education provides elderly people with self-confidence and independence; it helps them cope more effectively with changing environmental conditions, increases their potential to contribute to society, and gives them opportunities to share their experiences with their own and younger generations. This study used a review of the literature to examine the place of elderly education within adult education, focusing on how educational resources are developed and on defining their proper scope. Accordingly, the study aimed to determine the extent to which elderly education is affected by social change, the factors to be considered when planning elderly education, and the areas and subjects in which elderly people need education.


2020 ◽ Vol 40 (34) ◽ pp. 6600-6612 ◽ Author(s): Agoston Mihalik, Uta Noppeney

2020 ◽ Vol 29 ◽ Author(s): Xinyue Wang, Clemens Wöllner

The current review addresses the two internal clock models that have dominated timing research over recent decades. More specifically, it discusses whether the central or the intrinsic clock model better describes fluctuations in subjective time. Identifying the timing mechanism is critical to explaining and predicting timing behaviours in various audiovisual contexts. Music stands out for its prominence in real-life scenarios along with its great potential to alter subjective time. An emphasis on how music, as a complex dynamic auditory signal, affects timing accuracy led us to examine the behavioural and neuropsychological evidence supporting either clock model. In addition to the timing mechanisms, an overview of internal and external variables, such as attention and emotions, as well as of the classic experimental paradigms, is provided in order to examine how the mechanisms function in response to changes occurring particularly during music experiences. Neither model can entirely explain the effects of music on subjective timing: the intrinsic model applies primarily to subsecond timing, whereas the central model applies to the suprasecond range. To explain time experiences in music, one has to consider the target intervals as well as the contextual factors mentioned above. Further research is needed to bridge the gap between the theories, and suggestions for future empirical studies are outlined.


2020 ◽ Author(s): Ahmad Yousef

We previously showed that deep breathing can effectively and promptly alter visual and auditory bistable perception (see references 1, 2). Deep breathing requires cognitive control; in this study, we therefore investigate whether voluntary movements of the human hand can govern audiovisual perception, using an integrative stimulus built from the aforementioned visual and auditory stimuli. Strikingly, when a subject moves a pen in the direction of the actual physical motion, even without touching the screen, the original content of the audiovisual stimulus appears. Reversed percepts, namely illusory motion reversals and an illusory word, appear when the pen is moved in the direction opposite to the actual motion. Brain areas for cognitive action, namely the dorsolateral prefrontal cortex (DLPFC), premotor cortex, and primary motor cortex, may require a high concentration of oxygenated hemoglobin to execute deliberate movements, which could significantly reduce oxygenated-hemoglobin concentrations in the visual and auditory cortices. Such reductions would prevent one of the two systems—the central versus the peripheral conscious brain regions dedicated to audiovisual perception—from rapidly alternating their conscious products, and bistable audiovisual perception would therefore halt. We thus hypothesize that the DLPFC may send signals to deactivate the peripheral areas of the sensory brain regions when the cognitive action is congruent with the actual material, but send a contrary signal to deactivate the central areas of the sensory brain regions when the cognitive action and the actual material are incongruent.


2020 ◽ Author(s): Ahmad Yousef

Scientists have shown that visual rivalries, whether monocular or binocular, exhibit highly correlated temporal patterns of oscillation (see reference 1). Voluntary movements of the hand, which may be considered a neurophysiological top-down process, can reliably and promptly alter bistable audiovisual perception (see reference 2). In this study, we therefore investigate whether voluntary movements of the limbs, namely drawing by hand, can elongate the dominance duration of the dominant view. Strikingly, we found that as long as the hand moves in a direction correlated with the dominant view, that view's dominance duration is significantly elongated.

