Auditory Word Recognition
Recently Published Documents

TOTAL DOCUMENTS: 104 (FIVE YEARS: 13)
H-INDEX: 26 (FIVE YEARS: 1)

2021
Author(s): Chuanji Gao, Svetlana V. Shinkareva, Marius Peelen

Recognizing written or spoken words involves a sequence of processing stages, transforming sensory features into lexical-semantic representations. While the later processing stages are common across modalities, the initial stages are modality-specific. In the visual modality, previous studies have shown that words with positive valence are recognized faster than neutral words. Here, we examined whether the effects of valence on word recognition are specific to the visual modality or are common across visual and auditory modalities. To address this question, we analyzed multiple large databases of visual and auditory lexical decision tasks, relating the valence of words to lexical decision times while controlling for a large number of variables, including arousal and frequency. We found that valence differentially influenced visual and auditory word recognition. Valence had an asymmetric effect on visual lexical decision times, primarily speeding up recognition of positive words. By contrast, valence had a symmetric effect on auditory lexical decision times, with both negative and positive words speeding up word recognition relative to neutral words. The modality-specificity of valence effects was consistent across databases and was observed when the same set of words was compared across modalities. We interpret these findings as indicating that valence influences word recognition partly at the sensory-perceptual stage. We relate these effects to the effects of positive (reward) and negative (punishment) reinforcers on perception.
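A minimal sketch of the kind of item-level analysis described above: regressing lexical decision times on valence while controlling for arousal and frequency. The file and column names are hypothetical, and splitting valence at an assumed scale midpoint is our illustration, not the authors' exact method; it simply makes asymmetric versus symmetric effects visible as unequal versus mirror-image slopes.

```python
# Hypothetical item-level analysis; file and column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

items = pd.read_csv("lexical_decision_norms.csv")  # hypothetical merged database

# Split valence at an assumed scale midpoint (e.g., 5 on a 1-9 scale) so the
# negative-to-neutral and neutral-to-positive slopes can differ: an asymmetric
# effect shows up as unequal coefficients, a symmetric one as mirror images.
neutral = 5.0
items["val_neg"] = (items["valence"] - neutral).clip(upper=0)
items["val_pos"] = (items["valence"] - neutral).clip(lower=0)

for rt_column in ["visual_rt", "auditory_rt"]:
    model = smf.ols(f"{rt_column} ~ val_neg + val_pos + arousal + log_frequency",
                    data=items).fit()
    print(rt_column, model.params[["val_neg", "val_pos"]].round(3).to_dict())
```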


2021
Author(s): Alberto Furgoni, Antje Stoehr, Clara D. Martin

Purpose: In languages with alphabetical writing systems, the relationship between phonology and orthography is strong. Phonology-to-orthography mappings can be consistent (i.e., one phonological unit corresponds to one orthographic unit) or inconsistent (i.e., one phonological unit corresponds to multiple orthographic units). This study investigates whether the Orthographic Consistency Effect (OCE) emerges at the phonemic level during auditory word recognition, regardless of the opacity of a language's writing system.
Methods: Thirty L1-French (opaque orthography) and 30 L1-Spanish (transparent orthography) listeners completed an L1 auditory lexical decision task whose stimuli contained either only consistently-spelled phonemes or a mixture of consistently- and inconsistently-spelled phonemes.
Results: Listeners were faster at recognizing consistently-spelled words than inconsistently-spelled words, indicating that consistently-spelled words are recognized more easily. As for pseudoword processing, a numerical trend suggests that French listeners may be more sensitive to phoneme-to-grapheme inconsistencies.
Conclusions: These findings have theoretical implications: inconsistent phoneme-to-grapheme mappings, like inconsistencies at the level of the syllable or rhyme, affect auditory word recognition. Moreover, our results suggest that the OCE should occur in all languages with alphabetical writing systems, regardless of their degree of orthographic opacity.
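A hedged sketch of one way to test for the consistency effect in trial-level data like that described above; the data layout, column names, and model specification are our assumptions, not the authors' analysis.

```python
# Hypothetical trial-level OCE analysis; all columns are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("oce_lexical_decision.csv")  # hypothetical trial data
words = trials[trials["is_word"]]  # word trials only

# Random intercepts per participant; the consistency x L1 interaction asks
# whether the consistency effect differs between opaque (French) and
# transparent (Spanish) first languages.
m = smf.mixedlm("rt ~ consistency * l1_language",
                data=words, groups=words["participant"]).fit()
print(m.summary())
```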


2020
Author(s): Phoebe Gaston, Ellen Lau, Colin Phillips

Better understanding of word recognition requires a detailed account of how top-down and bottom-up information are integrated. In this paper, we use a combination of modeling and experimental work to investigate the mechanism by which expectations from syntactic context influence the processing of perceptual input during word recognition. The distinction between facilitatory and inhibitory mechanisms for the syntactic category constraint is an important aspect of this problem that has previously been underspecified, and syntactic category is a relatively simple test case for the issue of context in other domains. We first report simulations in jTRACE that point to an explanation for conflicting findings across methods regarding the existence and timing of syntactic constraints on lexical cohort competition. We show that the composition of the set of response candidates allowed by the task is predicted to influence whether and when changes in lexical activation can be observed in dependent measures, which is relevant for the design and interpretation of experiments involving cohort competition more broadly. These insights informed a new design for the visual world paradigm that distinguishes facilitatory and inhibitory mechanisms and ensures that activation of words from the wrong syntactic category is detectable if it occurs. We demonstrate how failure to ensure this could have obscured such activation in previous work, leading to the appearance of an inhibitory constraint. We find that wrong-category competition does occur, a result that is incompatible with an inhibitory syntactic category constraint.
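The point about response-candidate sets can be made concrete with a toy calculation (ours, not the authors' jTRACE simulations): converting hypothetical lexical activations into choice probabilities with the Luce choice rule shows that a wrong-category competitor's activation only registers in the dependent measure if that competitor is among the response candidates.

```python
import numpy as np

def luce(activations):
    """Normalize raw activations into choice probabilities over the candidate set."""
    a = np.asarray(activations, dtype=float)
    return a / a.sum()

# Hypothetical activations early in the target word.
target, wrong_category, unrelated = 0.60, 0.30, 0.10

print(luce([target, wrong_category, unrelated]))  # competition is measurable
print(luce([target, unrelated]))                  # same activations, invisible
```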


2020
Author(s): Elnaz Shafaei-Bajestan, Masoumeh Moradipour-Tari, Peter Uhrig, R. H. Baayen

A computational model for auditory word recognition is presented that enhances the model of Arnold et al. (2017). Real-valued features are extracted from the speech signal instead of discrete features. One-hot encoding for words' meanings is replaced by real-valued semantic vectors, with a small amount of noise added to safeguard discriminability. Instead of learning with Rescorla-Wagner updating, we use multivariate multiple regression, which captures discrimination learning at the limit of experience. These new design features substantially improve prediction accuracy for words extracted from spontaneous conversations. They also provide enhanced temporal granularity, enabling the modeling of cohort-like effects. Clustering with t-SNE shows that the acoustic form space captures phone-like similarities and differences. Thus, wide learning with high-dimensional vectors, no hidden layers, and no abstract mediating phone-like representations is not only possible but achieves excellent performance, approximating the lower bound of human accuracy on the challenging task of isolated word recognition.
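A minimal sketch, under our own assumptions, of the wide-learning idea described above: one linear mapping from real-valued acoustic features to real-valued semantic vectors, estimated in closed form by multivariate multiple regression (the least-squares endpoint of Rescorla-Wagner updating). All dimensions and data here are synthetic placeholders, not the authors' materials.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, n_tokens, n_acoustic, n_semantic = 50, 2000, 300, 100

# Synthetic stand-ins: each word has an acoustic prototype and a semantic
# vector; tokens are noisy acoustic realizations, and a little noise on the
# semantic targets keeps word meanings discriminable, as in the abstract.
prototypes = rng.normal(size=(n_words, n_acoustic))
semantics = rng.normal(size=(n_words, n_semantic))
labels = rng.integers(0, n_words, size=n_tokens)
C = prototypes[labels] + 0.3 * rng.normal(size=(n_tokens, n_acoustic))
S = semantics[labels] + 0.05 * rng.normal(size=(n_tokens, n_semantic))

# Wide learning: a single weight matrix, no hidden layers, no phone layer.
F, *_ = np.linalg.lstsq(C, S, rcond=None)

# Recognition: map a token into semantic space, pick the nearest word vector.
s_hat = C[0] @ F
predicted = int(np.argmin(np.linalg.norm(semantics - s_hat, axis=1)))
print("predicted word:", predicted, "| actual word:", int(labels[0]))
```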


2020
Vol. 23 (5), pp. 1082-1092
Author(s): Sara Guediche, Martijn Baart, Arthur G. Samuel

The current study investigates how second-language auditory word recognition in early, highly proficient Spanish–Basque (L1-L2) bilinguals is influenced by crosslinguistic phonological-lexical interactions and semantic priming. Phonological overlap between a word and its translation equivalent (phonological cognate status) and the semantic relatedness of a preceding prime were manipulated. Experiment 1 examined word recognition performance in noisy listening conditions, which introduce a high degree of uncertainty, whereas Experiment 2 employed clear listening conditions with low uncertainty. Under noisy listening conditions, semantic priming effects interacted with phonological cognate status: for word recognition accuracy, a related prime overcame the inhibitory effects of phonological overlap between target words and their translations. These findings are consistent with models of bilingual word recognition that incorporate crosslinguistic phonological-lexical-semantic interactions. Moreover, they suggest an interplay between L2-L1 interactions and the integration of information across acoustic and semantic levels of processing in flexibly mapping the speech signal onto spoken words under adverse listening conditions.
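One plausible way to model the accuracy pattern described above, sketched with invented file and column names (this is not the authors' analysis script): a logistic regression on noisy-condition trials in which the prime-by-cognate interaction captures a related prime overcoming crosslinguistic phonological inhibition.

```python
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("l2_word_recognition.csv")  # hypothetical trial data
noisy = trials[trials["condition"] == "noise"]  # Experiment 1 analogue

# correct: 0/1 recognition accuracy; prime_related and cognate_status are
# assumed categorical predictors coded in the data file.
m = smf.logit("correct ~ prime_related * cognate_status", data=noisy).fit()
print(m.summary())
```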

