Affective Valence of Words Differentially Affects Visual and Auditory Word Recognition

2021 ◽  
Author(s):  
Chuanji Gao ◽  
Svetlana V. Shinkareva ◽  
Marius Peelen

Recognizing written or spoken words involves a sequence of processing stages, transforming sensory features into lexical-semantic representations. While the later processing stages are common across modalities, the initial stages are modality-specific. In the visual modality, previous studies have shown that words with positive valence are recognized faster than neutral words. Here, we examined whether the effects of valence on word recognition are specific to the visual modality or are common across visual and auditory modalities. To address this question, we analyzed multiple large databases of visual and auditory lexical decision tasks, relating the valence of words to lexical decision times while controlling for a large number of variables, including arousal and frequency. We found that valence differentially influenced visual and auditory word recognition. Valence had an asymmetric effect on visual lexical decision times, primarily speeding up recognition of positive words. By contrast, valence had a symmetric effect on auditory lexical decision times, with both negative and positive words speeding up word recognition relative to neutral words. The modality-specificity of valence effects was consistent across databases and was observed when the same set of words was compared across modalities. We interpret these findings as indicating that valence influences word recognition partly at the sensory-perceptual stage. We relate these effects to the effects of positive (reward) and negative (punishment) reinforcers on perception.
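
The asymmetric versus symmetric patterns described above can be captured by a regression of decision times on valence with both linear and quadratic terms, alongside control covariates. The sketch below is purely illustrative: it uses synthetic data and assumed variable names (`valence`, `arousal`, `log_freq`), not the actual databases or model specification from the study. A reliably positive quadratic coefficient corresponds to the symmetric (U-shaped) auditory pattern; a negative linear coefficient with a weak quadratic term would correspond to the asymmetric visual pattern.

```python
import numpy as np

# Synthetic illustration only: all data and coefficients are made up.
rng = np.random.default_rng(0)
n = 1000
valence = rng.uniform(1, 9, n)   # 1 = very negative, 5 = neutral, 9 = very positive
arousal = rng.uniform(1, 9, n)
log_freq = rng.normal(3, 1, n)   # log word frequency (control variable)

# Simulate a symmetric (U-shaped) valence effect on reaction times (ms),
# as reported for the auditory modality: both extremes speed recognition.
rt = (900 + 5 * (valence - 5) ** 2
      - 20 * log_freq + 2 * arousal
      + rng.normal(0, 30, n))

# Design matrix: intercept, centered valence, valence squared, controls.
vc = valence - 5
X = np.column_stack([np.ones(n), vc, vc ** 2, arousal, log_freq])
beta, *_ = np.linalg.lstsq(X, rt, rcond=None)

# beta[1] is the linear valence effect, beta[2] the quadratic one.
print(beta)
```

With data simulated this way, the recovered quadratic coefficient is positive (near the generating value of 5), and the frequency coefficient is negative, mirroring the usual facilitation for frequent words.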

2021 ◽  
Author(s):  
Alberto Furgoni ◽  
Antje Stoehr ◽  
Clara D. Martin

Purpose: In languages with alphabetical writing systems, the relationship between phonology and orthography is strong. Phonology-to-orthography mappings can be consistent (i.e., one phonological unit corresponds to one orthographic unit) or inconsistent (i.e., one phonological unit corresponds to multiple orthographic units). This study investigates whether the Orthographic Consistency Effect (OCE) emerges at the phonemic level during auditory word recognition, regardless of the opacity of a language's writing system. Methods: Thirty L1-French (opaque language) and 30 L1-Spanish (transparent language) listeners participated in an L1 auditory lexical decision task which included stimuli containing either only consistently-spelled phonemes or a mix of consistently- and inconsistently-spelled phonemes. Results: Listeners were faster at recognizing consistently-spelled words than inconsistently-spelled words, implying that consistently-spelled words are recognized more easily. As for pseudoword processing, there is a numerical trend that might indicate a higher sensitivity of French listeners to phoneme-to-grapheme inconsistencies. Conclusions: These findings have theoretical implications: inconsistent phoneme-to-grapheme mappings, like inconsistencies at the level of the syllable or rhyme, impact auditory word recognition. Moreover, our results suggest that the OCE should occur in all languages with alphabetical writing systems, regardless of their level of orthographic opacity.


2012 ◽  
Vol 16 (3) ◽  
pp. 508-517 ◽  
Author(s):  
EVELYNE LAGROU ◽  
ROBERT J. HARTSUIKER ◽  
WOUTER DUYCK

Until now, research on bilingual auditory word recognition has been scarce, and although most studies agree that lexical access is language-nonselective, there is less consensus with respect to the influence of potentially constraining factors. The present study investigated the influence of three possible constraints. We tested whether language nonselectivity is restricted by (a) a sentence context in a second language (L2), (b) the semantic constraint of the sentence, and (c) the native language of the speaker. Dutch–English bilinguals completed an English auditory lexical decision task on the last word of low- and high-constraining sentences. Sentences were pronounced by a native Dutch speaker with English as the L2, or by a native English speaker with Dutch as the L2. Interlingual homophones (e.g., lief “sweet” – leaf /liːf/) were always recognized more slowly than control words. The semantic constraint of the sentence and the native accent of the speaker modulated, but did not eliminate interlingual homophone effects. These results are discussed within language-nonselective models of lexical access in bilingual auditory word recognition.


1992 ◽  
Vol 22 (1) ◽  
pp. 10-16 ◽  
Author(s):  
Denise Klein ◽  
Estelle Ann Doctor

This study reports an experiment that examines semantic representation in lexical decisions as a source of interconnection between words in bilingual memory. Lexical decision times were compared for interlingual polysemes such as HAND, which share spelling and meaning in both languages, and interlingual homographs such as KIND, which share spelling but not meaning. The main result was faster response times for polysemes than for interlingual homographs. Current theories of monolingual word recognition and bilingual semantic representation are discussed, and the findings are accommodated within the model of bilingual word recognition proposed by Doctor and Klein.


1999 ◽  
Vol 42 (3) ◽  
pp. 735-743 ◽  
Author(s):  
James W. Montgomery

In this study we examined the lexical mapping stage of auditory word recognition in children with specific language impairment (SLI). Twenty-one children with SLI, 21 children matched for chronological age (CM), and 21 vocabulary-matched (VM) children participated in a forward gating task in which they listened to successive temporal chunks of familiar monosyllabic nouns. After each gate, children guessed the identity of the word and provided a confidence rating of their word guess. Results revealed that the children with SLI performed comparably to the CM and VM children on all seven dependent measures related to lexical mapping. The findings were interpreted to suggest that children with SLI and their normally developing peers demonstrate a comparable lexical mapping phase (i.e., acoustic-phonetic analysis) of auditory word recognition.
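
The forward gating procedure described above presents successive, cumulatively longer onset chunks of a word. The sketch below shows one plausible way such gated stimuli might be constructed from a waveform; the sample rate, gate duration, and the use of a silent stand-in signal are all assumptions for illustration, not details taken from the study.

```python
import numpy as np

# Assumed parameters; the actual study's gate size may differ.
sample_rate = 44100                       # samples per second
gate_ms = 60                              # each gate adds 60 ms of signal
word = np.zeros(int(0.5 * sample_rate))   # stand-in for a 500-ms recording

# Build cumulative gates: gate i contains the first (i+1) chunks of the word.
gate_len = int(sample_rate * gate_ms / 1000)
n_gates = int(np.ceil(len(word) / gate_len))
gates = [word[: min((i + 1) * gate_len, len(word))] for i in range(n_gates)]

print(n_gates, len(gates[0]), len(gates[-1]))
```

After each gate, the child would hear the truncated stimulus, guess the word, and rate their confidence; the final gate contains the full recording.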


Author(s):  
Béryl Schulpen ◽  
Ton Dijkstra ◽  
Herbert J. Schriefers ◽  
Mark Hasper