word identification
Recently Published Documents

TOTAL DOCUMENTS: 618 (FIVE YEARS: 96)
H-INDEX: 55 (FIVE YEARS: 3)

Author(s): Jonathan Mirault, Charlotte Leflaëc, Jonathan Grainger

Abstract: In two on-line experiments (N = 386) we asked participants to make speeded grammatical decisions to a mixture of syntactically correct sentences and ungrammatical sequences of words. In Experiment 1, the ungrammatical sequences were formed by transposing two inner words in a correct sentence (e.g., the brave daunt the wind / the daunt brave the wind), and we manipulated the orthographic relatedness of the two transposed words (e.g., the brave brace the wind / the brace brave the wind). We found inhibitory effects of orthographic relatedness in decisions to both the correct sentences and the ungrammatical transposed-word sequences. In Experiment 2, we further investigated the impact of orthographic relatedness on transposed-word effects by including control ungrammatical sequences that were matched to the transposed-word sequences. We replicated the inhibitory effects of orthographic relatedness on both grammatical and ungrammatical decisions and found that transposed-word effects were not influenced by this factor. We conclude that orthographic relatedness across adjacent words impacts on processes involved in parallel word identification for sentence comprehension, but not on the association of word identities to positions in a sequence.
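For concreteness, the sketch below shows one way such transposed-word sequences could be generated from a correct sentence by swapping two adjacent inner words; it is a minimal illustration, and the helper name is hypothetical rather than taken from the paper.

```python
# Minimal, hypothetical sketch: build a transposed-word sequence by swapping
# two adjacent inner words of a correct sentence, keeping the first and last
# words in place (e.g., "the brave daunt the wind" -> "the daunt brave the wind").

def transpose_inner_words(sentence: str, position: int = 1) -> str:
    """Swap the words at `position` and `position + 1` (0-indexed)."""
    words = sentence.split()
    if not (1 <= position <= len(words) - 3):
        raise ValueError("transposition must involve two inner words")
    words[position], words[position + 1] = words[position + 1], words[position]
    return " ".join(words)

print(transpose_inner_words("the brave daunt the wind"))  # the daunt brave the wind
```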


Author(s): Samuel Evans, Stuart Rosen

Purpose: Many children have difficulties understanding speech. At present, there are few assessments that test for subtle impairments in speech perception with normative data from U.K. children. We present a new test that evaluates children's ability to identify target words in background noise by choosing between minimal pair alternatives that differ by a single articulatory phonetic feature. This task (a) is tailored to testing young children, but also readily applicable to adults; (b) has minimal memory demands; (c) adapts to the child's ability; and (d) does not require reading or verbal output. Method: We tested 155 children and young adults aged from 5 to 25 years on this new test of single word perception. Results: Speech-in-noise abilities in this particular task develop rapidly through childhood until they reach maturity at around 9 years of age. Conclusions: We make this test freely available and provide associated normative data. We hope that it will be useful to researchers and clinicians in the assessment of speech perception abilities in children who are hard of hearing or have developmental language disorder, dyslexia, or auditory processing disorder. Supplemental Material https://doi.org/10.23641/asha.17155934
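The abstract notes that the task adapts to the child's ability but does not spell out the rule, so the sketch below only illustrates a generic 1-up/1-down SNR staircase of the kind adaptive speech-in-noise tests commonly use; the function, starting level, and step size are placeholders, not the authors' procedure.

```python
# Illustrative only: a generic 1-up/1-down staircase that lowers the SNR after
# a correct response and raises it after an error, converging near the
# 50%-correct level. Not the test's actual adaptive rule.

def next_snr(current_snr_db: float, correct: bool, step_db: float = 2.0) -> float:
    """Return the signal-to-noise ratio (dB) to use on the next trial."""
    return current_snr_db - step_db if correct else current_snr_db + step_db

snr = 6.0                                         # starting SNR in dB (placeholder)
for correct in [True, True, False, True, False]:  # example trial outcomes
    snr = next_snr(snr, correct)
print(f"SNR after 5 trials: {snr:+.1f} dB")       # -> +4.0 dB
```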


2021, Vol 12
Author(s): Ana Marcet, María Fernández-López, Melanie Labusch, Manuel Perea

Recent research has found that the omission of accent marks in Spanish does not produce slower word identification times in go/no-go lexical decision and semantic categorization tasks [e.g., cárcel (prison) = carcel], thus suggesting that vowels like á and a are represented by the same orthographic units during word recognition and reading. However, there is a discrepant finding with the yes/no lexical decision task, where the words with the omitted accent mark produced longer response times than the words with the accent mark. In Experiment 1, we examined this discrepant finding by running a yes/no lexical decision experiment comparing the effects for words and non-words. Results showed slower response times for the words with omitted accent mark than for those with the accent mark present (e.g., cárcel < carcel). Critically, we found the opposite pattern for non-words: response times were longer for the non-words with accent marks (e.g., cárdil > cardil), thus suggesting a bias toward a “word” response for accented items in the yes/no lexical decision task. To test this interpretation, Experiment 2 used the same stimuli with a blocked design (i.e., accent mark present vs. omitted in all items) and a go/no-go lexical decision task (i.e., respond only to “words”). Results showed similar response times to words regardless of whether the accent mark was omitted (e.g., cárcel = carcel). This pattern strongly suggests that the longer response times to words with an omitted accent mark in yes/no lexical decision experiments are a task-dependent effect rather than a genuine reading cost.


2021
Author(s): Katrina Sue McClannahan, Amelia Mainardi, Austin Luor, Yi-Fang Chiu, Mitchell S. Sommers, ...

Background: Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. In background noise, however, higher-level cognitive processes play a more substantial role in successful communication. Cognitive resources are often limited in adults with dementia, which may therefore hamper word recognition. Objective: The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. Methods: Participants were adults aged 53–86 years with (n=16) or without (n=32) dementia symptoms as classified by a clinical dementia rating scale. Participants performed a word identification task with two levels of neighborhood density in quiet and in speech-shaped noise at two signal-to-noise ratios (SNRs), +6 dB and +3 dB. Our hypothesis was that listeners with mild dementia would have more difficulty with speech perception in noise under conditions that tax cognitive resources. Results: Listeners with mild dementia had poorer speech perception accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, phonological neighborhood density was not a factor in identification-task performance for either group. Conclusion: These results affirm the difficulty that listeners with mild dementia have with spoken word recognition, both in quiet and in background noise, consistent with a key role of cognitive resources in spoken word identification. However, the impact of neighborhood density in these listeners is less clear.
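For readers unfamiliar with how such SNR conditions are constructed, the sketch below shows the standard computation for mixing speech with noise at a target SNR such as +6 dB or +3 dB; it is a generic illustration with placeholder signals, not the authors' stimulus code.

```python
import numpy as np

# Generic sketch (not the authors' stimulus code): scale a noise waveform so
# that the speech-plus-noise mixture has a chosen SNR. Assumes 1-D float
# arrays sampled at the same rate.

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[: len(speech)]                            # trim noise to the speech length
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    target_noise_rms = speech_rms / (10 ** (snr_db / 20))   # noise RMS that yields snr_db
    return speech + noise * (target_noise_rms / noise_rms)

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))  # placeholder "speech"
noise = rng.standard_normal(32000)                           # placeholder noise
mixture_6db = mix_at_snr(speech, noise, snr_db=6.0)
mixture_3db = mix_at_snr(speech, noise, snr_db=3.0)
```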


2021
Author(s): Anna Elisabeth Piasecki

Systematic psycholinguistic research has considered the nature of the coexistence of two (or more) languages in the cognitive system of a fluent bilingual speaker. There is increasing consensus that when a bilingual is presented with a visual stimulus in one language, both of their languages are initially activated (non-selective access; e.g. Dijkstra & van Heuven 2002a). However, more recent research shows that certain factors may constrain (or eliminate) the activation of a task-irrelevant language (Duyck, van Assche, Drieghe, & Hartsuiker 2007; Elston-Güttler, Gunter, & Kotz 2005). The objective of the research in this thesis was to investigate how cross-linguistic activation is modulated by specific characteristics of a bilingual’s languages. This exploration was mainly limited to an under-investigated area, namely early sub-lexical word processing. The first of two studies focussed on word processing in the presence or absence of critical sub-lexical information. Specifically, I investigated whether onset capitals – a prominent marker indicating nouns in German – acted as a language-specific cue, and the extent to which this cue constrains competitive, lexical interaction between the bilingual’s languages (e.g. Hose-hose, the first being a German word meaning ‘trousers’ in English). This study also considered the extent to which the use of such information is affected by priming for a specific language from a preceding context sentence. The second study arose from a claim that readers employ distinct sub-lexical reading strategies, depending on the extent of spelling-to-sound (in)consistency in their language (e.g. Ziegler, Perry, Jacobs, & Braun 2001). Employing a bilingual population whose two languages were clearly distinguished in terms of such consistency, I explored the reading strategy used by bilingual participants reading in each language. A key issue is competitive activation between sub-lexical orthographic and phonological representations across languages. Each study was conducted with two groups of bilingual speakers, English-German and German-English. Individuals varied in their L2 proficiency, allowing a test of whether sub-lexical processing changed as a consequence of increasing proficiency. The main results from study one demonstrate that bilingual speakers are dependent upon sub-lexical, language-specific information. However, this is influenced by L2 proficiency, with a stronger effect for lower proficiency bilinguals. In addition, lower proficiency bilinguals were more dependent on sub-lexical cues when primed by a sentence in L2. In contrast, bilingual speakers performing in their L1 used these cues largely under very specific circumstances, i.e. when they did not know an item. The central finding of study two is that competition between sub-lexical orthographic and phonological representations across languages largely depends on the amount of spelling-to-sound (in)consistency in the bilinguals’ more dominant language. This is reflected in (1) slower identification of orthographically similar cognates which map onto different phonological representations across two languages, and (2) slower identification of cognates which do not share the same orthographic form across languages but have a common phonological representation. In addition, increasing L2 proficiency is reflected in attenuation of certain effects as processing becomes more automatic, and the development of a common reading strategy accommodating reading in either language.
A major contribution of the research conducted is what findings from both studies reveal about how the bilingual lexicon develops as proficiency increases. Furthermore, the findings contribute to our understanding of the organisation of the bilingual mental lexicon and the processes of word identification, and impose constraints on possible cognitive architectures.


Author(s): Mara De Rosa, Davide Crepaldi

Abstract: Research on visual word identification has extensively investigated the role of morphemes, recurrent letter chunks that convey a fairly regular meaning (e.g., lead-er-ship). Masked priming studies highlighted morpheme identification in complex (e.g., sing-er) and pseudo-complex (corn-er) words, as well as in nonwords (e.g., basket-y). The present study investigated whether such sensitivity to morphemes could be rooted in the visual system's sensitivity to statistics of letter (co)occurrence. To this aim, we assessed masked priming as induced by nonword primes obtained by combining a stem (e.g., bulb) with (i) naturally frequent, derivational suffixes (e.g., -ment), (ii) non-morphological, equally frequent word-endings (e.g., -idge), and (iii) non-morphological, infrequent word-endings (e.g., -kle). In two additional tasks, we collected interpretability and word-likeness measures for morphologically structured nonwords, to assess whether priming is modulated by such factors. Results indicate that masked priming is not affected by either the frequency or the morphological status of word-endings, a pattern that was replicated in a second experiment that also included lexical primes. Our findings are in line with models of early visual processing based on automatic stem/word extraction, and rule out letter chunk frequency as a main player in the early stages of visual word identification. Nonword interpretability and word-likeness do not affect this pattern.
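To make the prime-construction logic concrete, here is a small illustrative sketch; the stems and endings below are made-up placeholders based on the examples in the abstract, not the authors' item lists.

```python
# Illustrative sketch of the prime-construction logic: each stem is combined
# with (i) a real derivational suffix, (ii) a frequency-matched non-morphological
# ending, and (iii) an infrequent non-morphological ending. Items are placeholders.

stems = ["bulb", "lamp"]
endings = {
    "derivational_suffix":     ["ment", "ness"],
    "frequent_nonmorph_end":   ["idge", "ough"],
    "infrequent_nonmorph_end": ["kle", "awn"],
}

primes = {
    condition: [stem + ending for stem in stems for ending in items]
    for condition, items in endings.items()
}
print(primes["derivational_suffix"])  # ['bulbment', 'bulbness', 'lampment', 'lampness']
```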


2021, Vol 60, pp. 101024
Author(s): Er-Hu Zhang, Xue-Xian Lai, Defeng Li, Victoria Lai Cheng Lei, Yiqiang Chen, ...

2021, Vol 3 (4)
Author(s): Girma Yohannis Bade

This article reviews Natural Language Processing (NLP) and the challenges it faces for the Omotic language groups. Many technological achievements are partly fuelled by recent developments in NLP. NLP is a component of artificial intelligence (AI) and offers facilities to companies that need to analyze their business data reliably. However, many challenges limit the effectiveness of NLP applications for the Omotic language groups (Ometo) of Ethiopia. These challenges are word irregularity, the stop-word identification problem, compounding, and the languages’ limited digital data resources. Thus, this study opens the way for future researchers to further investigate NLP applications for these language groups.
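As a concrete illustration of the stop-word identification problem in a low-resource setting, the sketch below shows one common frequency-based approximation; the corpus, function name, and threshold are placeholders, not drawn from the article or from Omotic data.

```python
from collections import Counter

# Hedged sketch of a common workaround when no curated stop-word list exists
# for a low-resource language: treat the most frequent corpus tokens as a
# first approximation to the stop-word list. Placeholder corpus, not Omotic data.

def candidate_stop_words(corpus, top_n=20):
    counts = Counter(token for line in corpus for token in line.lower().split())
    return [token for token, _ in counts.most_common(top_n)]

corpus = ["example sentence one", "another example sentence", "one more example sentence"]
print(candidate_stop_words(corpus, top_n=3))  # ['example', 'sentence', 'one']
```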


PLoS Biology, 2021, Vol 19 (10), pp. e3001410
Author(s): Mohsen Alavash, Sarah Tune, Jonas Obleser

In multi-talker situations, individuals adapt behaviorally to the listening challenge mostly with ease, but how do brain neural networks shape this adaptation? We here establish a long-sought link between large-scale neural communications in electrophysiology and behavioral success in the control of attention in difficult listening situations. In an age-varying sample of N = 154 individuals, we find that connectivity between intrinsic neural oscillations extracted from source-reconstructed electroencephalography is regulated according to the listener’s goal during a challenging dual-talker task. These dynamics occur as spatially organized modulations in power-envelope correlations of alpha and low-beta neural oscillations during approximately 2-s intervals most critical for listening behavior relative to resting-state baseline. First, left frontoparietal low-beta connectivity (16 to 24 Hz) increased during anticipation and processing of spatial-attention cue before speech presentation. Second, posterior alpha connectivity (7 to 11 Hz) decreased during comprehension of competing speech, particularly around target-word presentation. Connectivity dynamics of these networks were predictive of individual differences in the speed and accuracy of target-word identification, respectively, but proved unconfounded by changes in neural oscillatory activity strength. Successful adaptation to a listening challenge thus latches onto 2 distinct yet complementary neural systems: a beta-tuned frontoparietal network enabling the flexible adaptation to attentive listening state and an alpha-tuned posterior network supporting attention to speech.
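As a rough illustration of the connectivity measure involved, the sketch below computes a band-limited power-envelope correlation between two signals (e.g., in the alpha or low-beta ranges named above); it is a generic example using numpy/scipy, not the authors' source-reconstruction and leakage-handling pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Minimal sketch of a band-limited power-envelope correlation. Generic
# illustration only: the study additionally works on source-reconstructed EEG
# and controls for signal leakage and oscillatory power changes.

def power_envelope_correlation(x, y, fs, band):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env_x = np.abs(hilbert(filtfilt(b, a, x))) ** 2   # band-limited power envelope
    env_y = np.abs(hilbert(filtfilt(b, a, y))) ** 2
    return np.corrcoef(np.log(env_x + 1e-12), np.log(env_y + 1e-12))[0, 1]

fs = 250                              # sampling rate in Hz (placeholder)
rng = np.random.default_rng(1)
x = rng.standard_normal(10 * fs)      # placeholder signals, 10 s each
y = rng.standard_normal(10 * fs)
print(power_envelope_correlation(x, y, fs, band=(7, 11)))
```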

