auditory lexical decision
Recently Published Documents


TOTAL DOCUMENTS: 67 (five years: 22)
H-INDEX: 16 (five years: 1)

2021 · Vol 16 (1) · pp. 23-48
Author(s): Filip Nenadić, Petar Milin, Benjamin V. Tucker

Abstract: A multitude of studies show the relevance of both inflectional paradigms (word form frequency distributions, i.e., inflectional entropy) and inflectional classes (whole class frequency distributions) for visual lexical processing. Their interplay, measured as the difference between paradigm and class frequency distributions (relative entropy), has also been shown to be significant. Relative entropy effects have now been recorded in nouns, verbs, adjectives, and prepositional phrases. However, all of these studies used visual stimuli – either written words or picture-naming tasks. The goal of our study is to test whether the effects of relative entropy can also be captured in the auditory modality. Forty young native speakers of Romanian (60% female) living in Serbia as part of the Romanian ethnic minority participated in an auditory lexical decision task. Stimuli were 168 Romanian verbs from two inflectional classes. Verbs were presented in four forms: present and imperfect 1st person singular, present 3rd person plural, and imperfect 2nd person plural. The results show that relative entropy influences both response accuracy and response latency. We discuss alternative operationalizations of relative entropy and how they can help us test hypotheses about the structure of the mental lexicon.
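For readers unfamiliar with the measure, relative entropy between a paradigm and its class is standardly read as a Kullback-Leibler divergence. The sketch below illustrates the computation; the frequency counts are made up for illustration and are not from the study.

    # A minimal sketch of the relative-entropy measure described above,
    # operationalized as the Kullback-Leibler divergence between a verb's
    # paradigm frequency distribution (P) and the frequency distribution
    # of its inflectional class (Q). Counts are hypothetical.
    import math

    def relative_entropy(paradigm_counts, class_counts):
        p_total = sum(paradigm_counts)
        q_total = sum(class_counts)
        p = [c / p_total for c in paradigm_counts]
        q = [c / q_total for c in class_counts]
        # D(P || Q) in bits; forms with zero paradigm frequency contribute nothing.
        return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

    # Four hypothetical inflected forms of one verb vs. the class-level totals:
    print(relative_entropy([120, 30, 40, 10], [5000, 2500, 1500, 1000]))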


2021 · Vol 16 (1) · pp. 133-164
Author(s): Arne Lohmann, Benjamin V. Tucker

Abstract: This article reports the results of an auditory lexical decision task testing the processing of phonetic detail in English noun/verb conversion pairs. It builds on recent findings showing that frequent occurrence in certain prosodic environments may lead to the storage of prosody-induced phonetic detail as part of the lexical representation. To investigate this question with noun/verb conversion pairs, ambicategorical stimuli were used that differ systematically in prosodic environment, as indicated by either a strong verb bias, e.g., talk (N/V), or a strong noun bias, e.g., voice (N/V). The auditory lexical decision task tests whether acoustic properties reflecting either the typical or the atypical prosodic environment affect the processing of recordings of the stimuli. In doing so, it tests assumptions about the storage of prosody-induced phonetic detail that distinguish competing model architectures. The results are most straightforwardly accounted for within an abstractionist architecture, in which the acoustic signal is mapped onto a representation based on the canonical pronunciation of the word.


2021
Author(s): Sophie Brand, Kimberley Mulder, Louis ten Bosch, Lou Boves

Author(s): Antje Stoehr, Clara D. Martin

Abstract: Orthography plays a crucial role in L2 learning, which generally relies on both oral and written input. We examine whether incongruencies between L1 and L2 grapheme-phoneme correspondences influence bilingual speech perception and production, even when both languages have been acquired in early childhood, before reading acquisition. Spanish–Basque and Basque–Spanish early bilinguals performed an auditory lexical decision task including Basque pseudowords created by replacing Basque /s̻/ with Spanish /θ/; these distinct phonemes share the same orthographic form, <z>. Participants also completed reading-aloud tasks in Basque and Spanish to test whether speech sounds with the same orthographic form were produced similarly in the two languages. Results for both groups showed that orthography had strong effects on speech perception but no effect on speech production. Taken together, these findings suggest that orthography plays a crucial role in the speech system of early bilinguals but does not automatically lead to non-native production.


2021
Author(s): Chuanji Gao, Svetlana V. Shinkareva, Marius Peelen

Recognizing written or spoken words involves a sequence of processing stages, transforming sensory features into lexical-semantic representations. While the later processing stages are common across modalities, the initial stages are modality-specific. In the visual modality, previous studies have shown that words with positive valence are recognized faster than neutral words. Here, we examined whether the effects of valence on word recognition are specific to the visual modality or are common across visual and auditory modalities. To address this question, we analyzed multiple large databases of visual and auditory lexical decision tasks, relating the valence of words to lexical decision times while controlling for a large number of variables, including arousal and frequency. We found that valence differentially influenced visual and auditory word recognition. Valence had an asymmetric effect on visual lexical decision times, primarily speeding up recognition of positive words. By contrast, valence had a symmetric effect on auditory lexical decision times, with both negative and positive words speeding up word recognition relative to neutral words. The modality specificity of the valence effects was consistent across databases and was observed when the same set of words was compared across modalities. We interpret these findings as indicating that valence influences word recognition partly at the sensory-perceptual stage. We relate these effects to the effects of positive (reward) and negative (punishment) reinforcers on perception.
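As an illustration of the kind of item-level analysis described above, one could regress lexical decision times on valence while controlling for arousal and frequency, letting the positive and negative halves of the valence scale carry separate slopes. The file name, column names, and model form below are assumptions for illustration, not the authors' actual pipeline.

    # A minimal sketch, assuming a hypothetical item-level file with columns
    # rt, valence (centered at neutral), arousal, and log_frequency.
    import pandas as pd
    import statsmodels.formula.api as smf

    items = pd.read_csv("auditory_ld_items.csv")  # hypothetical merged database
    # Separate predictors for the positive and negative halves of the valence
    # scale, so the effect is allowed to be asymmetric as reported above.
    items["pos_valence"] = items["valence"].clip(lower=0)
    items["neg_valence"] = (-items["valence"]).clip(lower=0)

    model = smf.ols("rt ~ pos_valence + neg_valence + arousal + log_frequency",
                    data=items).fit()
    print(model.summary())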


2021 · Vol 6
Author(s): Shannon Barrios, Rachel Hayes-Harb

Second language (L2) learners often exhibit difficulty perceiving novel phonological contrasts and/or using them to distinguish similar-sounding words. The auditory lexical decision (LD) task has emerged as a promising method to elicit the asymmetries in lexical processing performance that help to identify the locus of learners’ difficulty. However, LD tasks have been implemented and interpreted variably in the literature, complicating their utility in distinguishing between cases where learners’ difficulty lies at the level of perceptual and/or lexical coding. Building on previous work, we elaborate a set of LD ordinal accuracy predictions associated with various logically possible scenarios concerning the locus of learner difficulty, and provide new LD data involving multiple contrasts and native language (L1) groups. The inclusion of a native speaker control group allows us to isolate which patterns are unique to L2 learners, and the combination of multiple contrasts and L1 groups allows us to elicit evidence of various scenarios. We present findings of an experiment where native English, Korean, and Mandarin speakers completed an LD task that probed the robustness of listeners’ phonological representations of the English /æ/-/ɛ/ and /l/-/ɹ/ contrasts. Words contained the target phonemes, and nonwords were created by replacing the target phoneme with its counterpart (e.g., lecture/*[ɹ]ecture, battle/*b[ɛ]ttle). For the /æ/-/ɛ/ contrast, all three groups exhibited the same pattern of accuracy: near-ceiling acceptance of words and an asymmetric pattern of responses to nonwords, with higher accuracy for nonwords containing [æ] than [ɛ]. For the /l/-/ɹ/ contrast, we found three distinct accuracy patterns: native English speakers’ performance was highly accurate and symmetric for words and nonwords, native Mandarin speakers exhibited asymmetries favoring [l] items for words and nonwords (interpreted as evidence that they experienced difficulty at the perceptual coding level), and native Korean speakers exhibited asymmetries in opposite directions for words (favoring [l]) and nonwords (favoring [ɹ]; evidence of difficulty at the lexical coding level). Our findings suggest that the auditory LD task holds promise for determining the locus of learners’ difficulty with L2 contrasts; however, we raise several issues requiring attention to maximize its utility in investigating L2 phonolexical processing.
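The word/nonword accuracy asymmetries at the core of this design can be summarized with a simple cross-tabulation. The sketch below assumes hypothetical trial-level data with columns for L1 group, contrast, target phoneme, lexicality, and response accuracy; it is not the authors' analysis code.

    # A minimal sketch, assuming a hypothetical trial-level file; it tabulates
    # mean LD accuracy by L1 group, contrast, target phoneme, and lexicality,
    # which is where the asymmetries described above would show up.
    import pandas as pd

    trials = pd.read_csv("ld_trials.csv")  # hypothetical data
    accuracy = (trials
                .groupby(["l1_group", "contrast", "target_phoneme", "is_word"])
                ["correct"]
                .mean()
                .unstack("is_word"))   # columns: nonword vs. word accuracy
    print(accuracy)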


2021
Author(s): Alberto Furgoni, Antje Stoehr, Clara D. Martin

Purpose: In languages with alphabetical writing systems, the relationship between phonology and orthography is strong. Phonology-to-orthography mappings can be consistent (i.e., one phonological unit corresponds to one orthographic unit) or inconsistent (i.e., one phonological unit corresponds to multiple orthographic units). This study investigates whether the Orthographic Consistency Effect (OCE) emerges at the phonemic level during auditory word recognition, regardless of the opacity of a language's writing system.
Methods: Thirty L1-French (opaque orthography) and 30 L1-Spanish (transparent orthography) listeners participated in an L1 auditory lexical decision task that included stimuli containing only consistently-spelled phonemes and stimuli containing both consistently-spelled and inconsistently-spelled phonemes.
Results: Listeners were faster at recognizing consistently-spelled words than inconsistently-spelled words, indicating that consistently-spelled words are recognized more easily. As for pseudoword processing, a numerical trend might indicate a higher sensitivity of French listeners to phoneme-to-grapheme inconsistencies.
Conclusions: These findings have theoretical implications: inconsistent phoneme-to-grapheme mappings, like inconsistencies at the level of the syllable or rhyme, impact auditory word recognition. Moreover, our results suggest that the OCE should occur in all languages with alphabetical writing systems, regardless of their level of orthographic opacity.
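A by-participant analysis of the consistency effect on response times could look roughly like the sketch below; the data file, column names, and random-effects structure are assumptions for illustration only, not the study's actual model.

    # A minimal sketch, assuming a hypothetical trial-level file with response
    # times, a consistency factor (consistent vs. inconsistent spelling), the
    # listener's L1 (French vs. Spanish), and a participant identifier.
    import pandas as pd
    import statsmodels.formula.api as smf

    trials = pd.read_csv("oce_ld_trials.csv")  # hypothetical data
    words = trials[trials["is_word"]]          # word trials only

    # Random intercepts by participant; the interaction tests whether the
    # consistency effect differs between the opaque and transparent group.
    m = smf.mixedlm("rt ~ consistency * l1_language",
                    data=words, groups=words["participant"]).fit()
    print(m.summary())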


2021 · Vol 6
Author(s): Monica Ghosh, John M. Levis

The use of suprasegmental cues to word stress occurs across many languages. Nevertheless, L1 English listeners pay little attention to suprasegmental word stress cues, and evidence shows that segmental cues matter more for how they identify words in speech. L1 English listeners assume that strong syllables with full vowels mark the beginning of a new word, attempting alternative resegmentations only when this heuristic fails to identify a viable word string. English word stress errors have been shown to severely disrupt processing for both L1 and L2 listeners, but not all word stress errors are equally damaging. Vowel quality and direction of stress shift are thought to be predictors of the intelligibility of non-standard stress pronunciations, but most research on this topic has so far been limited to two-syllable words. The current study uses auditory lexical decision and delayed word identification tasks to test a hypothesized English Word Stress Error Gravity Hierarchy for words of two to five syllables. Results indicate that English word stress errors affect intelligibility most when they introduce concomitant vowel errors, an effect that is somewhat mediated by the direction of stress shift. As a consequence, the relative intelligibility impact of any particular lexical stress error can be predicted by the Hierarchy for both L1 and L2 English listeners. These findings have implications for L1 and L2 English pronunciation research and teaching. For research, our results demonstrate that varied findings about loss of intelligibility are connected to the vowel quality changes that accompany word stress errors, and that these factors must be accounted for in intelligibility research. For teaching, the results indicate that not all word stress errors are equally important, and that only word stress errors that affect vowel quality should be prioritized.


2021 · Vol 11 (1)
Author(s): Teresa Sylvester, Johanna Liebig, Arthur M. Jacobs

Abstract: The goal of the present study was to investigate whether 6- to 9-year-old children and adults show similar neural responses to affective words. An event-related neuroimaging paradigm was used in which both age cohorts performed the same auditory lexical decision task (LDT). The results show similarities in (auditory) lexico-semantic network activation as well as in areas associated with affective information. In both age cohorts, activations were stronger for positive than for negative words, exhibiting a positivity superiority effect. Children showed less activation than adults in areas associated with affective information across all three valence categories. Our results are discussed in the light of computational models of word recognition and previous findings of affective contributions to LDT in adults.

