Semantic Congruency
Recently Published Documents


TOTAL DOCUMENTS: 39 (last five years: 7)
H-INDEX: 10 (last five years: 1)

2021, Vol. 12
Author(s): Fei Li, Xiangfei Hong, Zhaoying He, Sixuan Wu, Chenyi Zhang

The aim of the present study was to investigate how Chinese-Malay bilingual speakers with Chinese as a heritage language process semantic congruency effects in Chinese, and how their brain activity compares to that of monolingual Chinese speakers, using electroencephalography (EEG) recordings. To this end, semantic congruency was manipulated in Chinese classifier-noun phrases, resulting in four conditions: (i) a strongly constraining/high-cloze, plausible (SP) condition, (ii) a weakly constraining/low-cloze, plausible (WP) condition, (iii) a strongly constraining/implausible (SI) condition, and (iv) a weakly constraining/implausible (WI) condition. The analysis of the EEG data focused on two event-related potential (ERP) components: the N400, which is known for its sensitivity to the semantic fit of a target word to its context, and a post-N400 late positive complex (LPC), which is linked to semantic integration after prediction violations and to retrospective, evaluative processes. We found similar N400/LPC effects in response to the semantic congruency manipulations in the monolingual and bilingual groups: a graded N400 pattern (WI/SI > WP > SP), a larger frontal LPC for WP compared to SP, SI, and WI, larger centro-parietal LPCs for WP compared to SI and WI, and a larger centro-parietal LPC for SP compared to SI. These results suggest that, in terms of ERP data, Chinese-Malay early bilingual speakers predict and integrate upcoming semantic information in Chinese classifier-noun phrases to the same extent as monolingual Chinese speakers. However, the global field power (GFP) data showed significant differences between SP and WP in the N400 and LPC time windows in bilinguals, whereas no such effects were observed in monolinguals. This finding was interpreted as showing that bilinguals differ from their monolingual peers in the intensity of global field power when processing plausible classifier-noun pairs with different degrees of congruency.
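As context for the measures above: global field power (GFP) is conventionally computed as the spatial standard deviation across electrodes at each time point, and N400/LPC effects as mean amplitudes within a time window over a group of channels. The sketch below illustrates both computations on generic NumPy arrays; the electrode indices, time windows, and simulated data are illustrative assumptions, not the study's actual parameters.

```python
# Minimal sketch: GFP and N400 mean amplitude from epoched EEG data.
# All array shapes, channel indices, time windows and condition labels
# are illustrative assumptions.
import numpy as np

def global_field_power(evoked):
    """GFP = spatial standard deviation across channels at each time point.

    evoked: array of shape (n_channels, n_times), average-referenced ERP.
    """
    return evoked.std(axis=0)

def mean_amplitude(evoked, times, tmin, tmax, channel_idx):
    """Mean amplitude over a time window and a set of channels."""
    mask = (times >= tmin) & (times <= tmax)
    return evoked[np.ix_(channel_idx, np.where(mask)[0])].mean()

# Fake data: 4 conditions (SP, WP, SI, WI), 32 channels, 1000 Hz, -0.2..0.8 s
rng = np.random.default_rng(0)
times = np.arange(-0.2, 0.8, 0.001)
erps = {cond: rng.normal(0, 1e-6, (32, times.size))
        for cond in ("SP", "WP", "SI", "WI")}

centro_parietal = [10, 11, 12, 20, 21]  # hypothetical electrode indices
for cond, erp in erps.items():
    n400 = mean_amplitude(erp, times, 0.300, 0.500, centro_parietal)
    gfp_peak = global_field_power(erp)[(times >= 0.300) & (times <= 0.500)].max()
    print(f"{cond}: N400 mean amplitude = {n400:.2e} V, GFP peak = {gfp_peak:.2e} V")
```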


2021, Vol. 152, pp. 105758
Author(s): Giovanni Federico, François Osiurak, Emanuelle Reynaud, Maria A. Brandimonte

2021
Author(s): Elyse G Letts, Aysha Basharat, Michael Barnett-Cowan

Previous studies demonstrate that semantics, the higher-level meaning of multi-modal stimuli, can impact multisensory integration. Valence, an affective response to images, has not yet been tested in non-priming response time (RT) or temporal order judgement (TOJ) tasks. This study investigated the effects of both semantic congruency and valence of non-speech audiovisual stimuli on multisensory integration via RT and TOJ tasks, assessing processing speed (RT), the point of subjective simultaneity (PSS), and the time window within which multisensory stimuli are likely to be perceived as simultaneous (temporal binding window; TBW). Forty participants (mean age: 26.25; 17 female) were recruited from Prolific Academic, resulting in 37 complete datasets. Both congruence and valence had significant main effects on RT (congruent and high-valence stimuli decreased RT), as well as an interaction effect (the congruent/high-valence condition was significantly faster than all others). For TOJ, images high in valence required visual stimuli to be presented significantly earlier than auditory stimuli in order for the audio and visual stimuli to be perceived as simultaneous. Further, a significant interaction effect of congruence and valence on the PSS revealed that the congruent/high-valence condition was significantly earlier than all other conditions. A subsequent analysis showed a positive correlation between TBW width (b-values) and RT (as the TBW widens, RT increases) for the categories whose PSS differed most from 0 (congruent/high and incongruent/low). This study provides new evidence that supports previous research on semantic congruency and presents a novel incorporation of valence into behavioural responses.
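The PSS and TBW mentioned above are typically estimated by fitting a sigmoid to the proportion of "visual first" responses as a function of stimulus onset asynchrony (SOA). The study reports "b-values", which suggests a specific parameterisation that may differ from the one below; this sketch uses a cumulative Gaussian fit with SciPy, and the SOAs and response proportions are invented.

```python
# Minimal sketch: estimating the point of subjective simultaneity (PSS) and a
# temporal binding window (TBW) estimate from TOJ data via a cumulative
# Gaussian fit. Parameterisation and data are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cumulative_gaussian(soa, mu, sigma):
    """Probability of a 'visual first' response as a function of SOA (ms)."""
    return norm.cdf(soa, loc=mu, scale=sigma)

# Hypothetical data: SOA (audio leading = negative), proportion 'visual first'
soa = np.array([-300, -200, -100, -50, 0, 50, 100, 200, 300], dtype=float)
p_visual_first = np.array([0.05, 0.10, 0.25, 0.40, 0.55, 0.70, 0.85, 0.95, 0.98])

(pss, sigma), _ = curve_fit(cumulative_gaussian, soa, p_visual_first, p0=(0.0, 100.0))

# One common convention: the TBW spans the SOAs where the fitted curve lies
# between 25% and 75%, i.e. roughly +/- 0.674 * sigma around the PSS.
tbw_width = 2 * 0.674 * sigma
print(f"PSS = {pss:.1f} ms, TBW width ~ {tbw_width:.1f} ms")
```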


2021, Vol. 28 (1)
Author(s): Guillermo Rodríguez-Martínez, Henry Castillo-Parra, Pedro J. Rosa, ...

Introduction: Multisensory audiovisual semantic congruency is the process by which visual information is perceived as integrated with auditory stimuli because the two coincide in simultaneity and semantic correspondence. This study aimed to establish whether visual percepts that semantically correspond to auditory stimuli are associated with ocular fixations in bottom-up modulating areas, both when body posture keeps the idiotropic axes aligned with the up direction and in an orientation in which the up direction and the head's idiotropic axis are vectorially opposed. Method: Two groups (one for each position) were selected from a sample of 88 people. A bistable image was presented on the screen of a fixed 120 Hz eye-tracking device, with background auditory stimuli provided in order to establish semantic congruencies and their relation to ocular fixations. Results: Audiovisual semantic congruency was associated with fixations when the idiotropic vectors were aligned with the up direction. Fixations in bottom-up modulating areas were not associated with multisensory audiovisual semantic congruency when the head's idiotropic vector was parallel to the gravity vector, and eye fixations decreased significantly when the head's idiotropic axis was aligned with the gravity vector. Conclusion: Body position can affect the visual perceptual processes involved in the occurrence of semantic congruency.
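Fixation counts like those analysed above are usually derived from raw gaze samples by the eye-tracker's own software; a common offline alternative is a dispersion-threshold (I-DT) algorithm. The sketch below shows that approach for 120 Hz data, with thresholds and simulated gaze samples that are purely illustrative and not the study's actual pipeline.

```python
# Minimal sketch: counting fixations from 120 Hz gaze samples with a
# dispersion-threshold (I-DT) algorithm. Thresholds and data are assumptions.
import numpy as np

def detect_fixations(x, y, sample_rate=120, max_dispersion=1.0, min_duration=0.100):
    """Return (start_idx, end_idx) pairs for fixations.

    x, y: NumPy arrays of gaze coordinates (e.g. degrees of visual angle).
    max_dispersion: (max(x)-min(x)) + (max(y)-min(y)) allowed within a fixation.
    min_duration: minimum fixation duration in seconds.
    """
    def dispersion(s, e):
        return (x[s:e].max() - x[s:e].min()) + (y[s:e].max() - y[s:e].min())

    min_samples = int(min_duration * sample_rate)
    fixations, start = [], 0
    while start + min_samples <= len(x):
        end = start + min_samples
        if dispersion(start, end) <= max_dispersion:
            # Grow the window while dispersion stays under threshold.
            while end < len(x) and dispersion(start, end + 1) <= max_dispersion:
                end += 1
            fixations.append((start, end))
            start = end
        else:
            start += 1
    return fixations

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.1, 60), rng.normal(5, 0.1, 60)])  # two clusters
y = np.concatenate([rng.normal(0, 0.1, 60), rng.normal(2, 0.1, 60)])
print(f"Detected {len(detect_fixations(x, y))} fixations")
```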


2021, Vol. 11 (1)
Author(s): Patrycja Delong, Uta Noppeney

Information integration is considered a hallmark of human consciousness. Recent research has challenged this tenet by showing multisensory interactions in the absence of awareness. This psychophysics study assessed the impact of spatial and semantic correspondences on audiovisual binding in the presence and absence of visual awareness by combining forward–backward masking with spatial ventriloquism. Observers were presented with object pictures and synchronous sounds that were spatially and/or semantically congruent or incongruent. On each trial, observers located the sound, identified the picture, and rated the picture's visibility. We observed a robust ventriloquist effect for subjectively visible and invisible pictures, indicating that pictures that evade our perceptual awareness influence where we perceive sounds. Critically, semantic congruency enhanced these visual biases on perceived sound location only when the picture entered observers' awareness. Our results demonstrate that crossmodal influences operating from vision to audition and vice versa are interactively controlled by spatial and semantic congruency in the presence of awareness. However, when visual processing is disrupted by masking procedures, audiovisual interactions no longer depend on semantic correspondences.
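The ventriloquist effect described above is commonly quantified as the fraction of the audiovisual spatial disparity by which the reported sound location shifts toward the picture. The sketch below computes such a bias on invented trial data; the study's actual analysis (e.g. model-based or mixed-effects approaches) may well differ.

```python
# Minimal sketch: a ventriloquist bias as the proportion of the audiovisual
# disparity by which reported sound locations are pulled toward the picture.
# Locations and numbers are invented for illustration.
import numpy as np

def ventriloquist_bias(reported_sound_loc, true_sound_loc, visual_loc):
    """Trialwise mean of (reported - true sound) / (visual - true sound).

    1.0 means full capture by the visual stimulus, 0.0 means no shift.
    """
    disparity = visual_loc - true_sound_loc
    shift = reported_sound_loc - true_sound_loc
    return np.mean(shift / disparity)

# Hypothetical trials (azimuth in degrees)
true_sound = np.array([-10.0, 10.0, -10.0, 10.0])
visual = np.array([10.0, -10.0, 10.0, -10.0])           # opposite hemifield
reported_visible = np.array([-2.0, 3.0, -1.0, 2.0])     # strong pull toward picture
reported_invisible = np.array([-7.0, 6.0, -8.0, 7.0])   # weaker pull

print("bias (picture visible):  ", ventriloquist_bias(reported_visible, true_sound, visual))
print("bias (picture invisible):", ventriloquist_bias(reported_invisible, true_sound, visual))
```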


i-Perception, 2020, Vol. 11 (6), pp. 204166952098109
Author(s): Qingqing Li, Qiong Wu, Yiyang Yu, Fengxia Wu, Satoshi Takahashi, ...

Attentional processes play a complex and multifaceted role in the integration of input from different sensory modalities. However, whether increased attentional load disrupts the audiovisual (AV) integration of common objects that involve semantic content remains unclear. Furthermore, knowledge of how semantic congruency interacts with attentional load to influence the AV integration of common objects is limited. We investigated these questions by examining AV integration under various attentional-load conditions. AV integration was assessed with an animal-identification task using unisensory stimuli (animal images and sounds) and AV stimuli (semantically congruent and semantically incongruent AV objects), while attentional load was manipulated with a rapid serial visual presentation task. Our results indicate that attentional load did not attenuate the integration of semantically congruent AV objects. However, semantically incongruent animal sounds and images were not integrated (no multisensory facilitation was observed), and the interference effect produced by semantically incongruent AV objects was reduced under increased attentional load. These findings highlight the critical role of semantic congruency in modulating the effect of attentional load on the AV integration of common objects.
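Multisensory facilitation of the kind referenced above is often tested against Miller's race model inequality, F_AV(t) <= F_A(t) + F_V(t): if the audiovisual reaction-time distribution exceeds that bound, the speed-up cannot be explained by independent unisensory races. Whether the study used this exact test is an assumption; the sketch below demonstrates the inequality on invented RT samples.

```python
# Minimal sketch: testing for multisensory facilitation with Miller's race
# model inequality, F_AV(t) <= F_A(t) + F_V(t). RT samples are invented.
import numpy as np

def ecdf(rts, t):
    """Empirical cumulative distribution of reaction times evaluated at times t."""
    rts = np.sort(np.asarray(rts))
    return np.searchsorted(rts, t, side="right") / rts.size

rng = np.random.default_rng(2)
rt_audio = rng.normal(480, 60, 200)   # unisensory auditory RTs (ms)
rt_visual = rng.normal(500, 60, 200)  # unisensory visual RTs (ms)
rt_av = rng.normal(430, 55, 200)      # audiovisual RTs (ms)

t = np.linspace(300, 700, 81)
violation = ecdf(rt_av, t) - np.minimum(ecdf(rt_audio, t) + ecdf(rt_visual, t), 1.0)

# Positive values indicate the AV distribution is faster than any race model
# allows, i.e. evidence for multisensory integration (facilitation).
print(f"max race-model violation: {violation.max():.3f}")
```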


Cognition, 2019, Vol. 190, pp. 20-41
Author(s): Carlo Fantoni, Giulio Baldassi, Sara Rigutti, Valter Prpic, Mauro Murgia, ...

Author(s): Antonio Rei Fidalgo, Kohshe Takahashi, Aiko Murata, Katsumi Watanabe
