EXPRESS: Does omitting the accent mark in a word affect sentence reading? Evidence from Spanish

2021 ◽  
pp. 174702182110446
Author(s):  
Ana Marcet ◽  
Manuel Perea

Lexical stress in multisyllabic words is consistent in some languages (e.g., first syllable in Finnish), but it is variable in others (e.g., Spanish, English). To help lexical processing in a transparent language like Spanish, scholars have proposed a set of rules specifying which words require an accent mark indicating lexical stress in writing. However, recent word recognition experiments using the lexical decision task showed that word identification times were not affected by the omission of a word's accent mark in Spanish. To examine this question in a paradigm with greater ecological validity, we tested whether omitting the accent mark in a Spanish word had a deleterious effect during silent sentence reading. A target word was embedded in a sentence either with or without its accent mark. Results showed no reading cost of omitting the word's accent mark in first-pass eye fixation durations, but we found a cost in the total reading time spent on the target word (i.e., including re-reading). Thus, the omission of an accent mark delays late, but not early, lexical processing in Spanish. These findings help constrain the locus of accent mark information in models of visual word recognition and reading. Furthermore, these findings offer some clues on how to simplify the Spanish rules of accentuation.
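The distinction the abstract draws between first-pass fixation durations and total reading time is easy to make concrete. Below is a minimal Python sketch of the two measures computed from a fixation log; the log format (region index plus duration) and the example data are assumptions for illustration, not the authors' analysis pipeline.

```python
# Sketch: deriving the two eye-tracking measures contrasted in the
# abstract from a fixation log. The log format and field names are
# assumptions, not the authors' actual pipeline.

def reading_time_measures(fixations, target_region):
    """fixations: list of (region_index, duration_ms) in chronological order.
    Returns (first_pass_ms, total_time_ms) for target_region."""
    first_pass = 0
    total = 0
    entered = False
    left_after_entry = False
    for region, dur in fixations:
        if region == target_region:
            total += dur                # total time includes re-reading
            if not left_after_entry:
                entered = True
                first_pass += dur       # first pass: before the first exit
        elif entered:
            left_after_entry = True     # gaze has left the target region
    return first_pass, total

# Example: reader fixates the target (region 3), moves on, then rereads it.
fixes = [(1, 210), (2, 190), (3, 240), (4, 220), (3, 180)]
print(reading_time_measures(fixes, 3))  # -> (240, 420)
```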

Author(s):  
Leigh B. Fernandez ◽  
Christoph Scheepers ◽  
Shanley E. M. Allen

Abstract In this study we investigated parafoveal processing by L1 and late L2 speakers of English (L1 German) while reading in English. We hypothesized that L2ers would make use of semantic and orthographic information parafoveally. Using the gaze-contingent boundary paradigm, we manipulated six parafoveal masks in a sentence (Mark found the *wood for the fire; * indicates the invisible boundary): identical word mask (wood), English orthographic mask (wook), English string mask (zwwl), German mask (holz), German orthographic mask (holn), and German string mask (kxfs). We found an orthographic benefit for L1ers and L2ers when the mask was orthographically related to the target word (wood vs. wook), in line with previous L1 research. English L2ers did not derive a benefit (rather, an interference) when a non-cognate translation mask from their L1 was used (wood vs. holz), but did derive a benefit from a German orthographic mask (wood vs. holn). While unexpected, it may be that L2ers incur a switching cost when the complete German word is presented parafoveally, and derive a benefit by keeping both lexicons active when a partial German word is presented parafoveally (narrowing down lexical candidates). To the authors' knowledge there is no mention of parafoveal processing in any model of L2 processing/reading, and the current study provides the first evidence for a parafoveal non-cognate orthographic benefit (but only with partial orthographic overlap) in sentence reading for L2ers. We discuss how these findings fit into the framework of bilingual word recognition theories.
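Operationally, the gaze-contingent boundary paradigm is a real-time display-swap rule: the mask occupies the target slot until the eyes cross an invisible boundary, at which point the target word replaces it (ideally during the saccade, so the change goes unnoticed). A minimal Python sketch of that core logic follows; the coordinate, names, and sample loop are illustrative assumptions, not the authors' experiment software.

```python
# Minimal sketch of the boundary paradigm's core logic: show a mask in
# the target slot until gaze crosses an invisible boundary, then swap in
# the target word. All names and values are illustrative assumptions.

BOUNDARY_X = 412          # pixel x-coordinate of the invisible boundary (assumed)

def frame_update(gaze_x, display):
    """Called once per eye-tracker sample; swaps mask for target
    on the first sample past the boundary."""
    if not display["swapped"] and gaze_x > BOUNDARY_X:
        display["target_slot"] = display["target_word"]  # e.g., "wood"
        display["swapped"] = True                        # swap happens once,
                                                         # during the saccade
    return display

display = {"target_slot": "holn", "target_word": "wood", "swapped": False}
for sample_x in [300, 370, 405, 430, 460]:   # simulated gaze samples
    display = frame_update(sample_x, display)
print(display["target_slot"])  # -> "wood"
```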


2020 ◽  
Vol 31 (06) ◽  
pp. 412-441 ◽  
Author(s):  
Richard H. Wilson ◽  
Victoria A. Sanchez

Abstract
Background In the 1950s, with monitored live-voice testing, the VU meter time constant and the short durations and amplitude-modulation characteristics of monosyllabic words necessitated the use of the carrier phrase amplitude to monitor (indirectly) the presentation level of the words. This practice continues with recorded materials. To relieve the carrier phrase of this function, the influence that the carrier phrase has on word recognition performance first needs clarification, which is the topic of this study.
Purpose Recordings of Northwestern University Auditory Test No. 6 by two female speakers were used to compare word recognition performances with and without the carrier phrases when the carrier phrase and test word were (1) in the same utterance stream, with the words excised digitally from the carrier (VA-1 speaker), and (2) independent of one another (VA-2 speaker). The 50-msec segment of the vowel in the target word with the largest root mean square (RMS) amplitude was used to equate the target word amplitudes.
Research Design A quasi-experimental, repeated measures design was used.
Study Sample Twenty-four young normal-hearing adults (YNH; M = 23.5 years; pure-tone average [PTA] = 1.3-dB HL) and 48 older listeners with hearing loss (OHL; M = 71.4 years; PTA = 21.8-dB HL) participated in two one-hour sessions.
Data Collection and Analyses Each listener had 16 listening conditions (2 speakers × 2 carrier phrase conditions × 4 presentation levels) with 100 randomized words, 50 different words by each speaker. Each word was presented 8 times (2 carrier phrase conditions × 4 presentation levels [YNH, 0- to 24-dB SL; OHL, 6- to 30-dB SL]). The 200 recorded words for each condition were randomized as eight 25-word tracks. In both test sessions, one practice track was followed by 16 tracks alternated between speakers and randomized by blocks of the four conditions. Central tendency and repeated measures analysis of variance statistics were used.
Results With the VA-1 speaker, the overall mean recognition performances were 6.0% (YNH) and 8.3% (OHL) significantly better with the carrier phrase than without it. These differences were in part attributed to the distortion of some words caused by excising the words from their carrier phrases. With the VA-2 speaker, recognition performances in the with- and without-carrier-phrase conditions did not differ significantly for either listener group, except in one condition (YNH listeners at 8-dB SL). The slopes of the mean functions were steeper for the YNH listeners (3.9%/dB to 4.8%/dB) than for the OHL listeners (2.4%/dB to 3.4%/dB) and were <1%/dB steeper for the VA-1 speaker than for the VA-2 speaker. Although the mean results were clear, the variability in performance differences between the two carrier phrase conditions across individual participants and individual words was striking and is considered in detail.
Conclusion The current data indicate that word recognition performances with and without the carrier phrase (1) differed when the carrier phrase and target word were produced in the same utterance, with poorer performances when the target words were excised from their respective carrier phrases (VA-1 speaker), and (2) were the same when the carrier phrase and target word were produced as independent utterances (VA-2 speaker).
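The level-equating step described in the Purpose section (using the highest-RMS 50-msec vowel segment to set the target word amplitudes) can be sketched directly. A plain-Python illustration under assumed inputs follows; the study's actual software may differ, and for brevity the sketch scans the whole word rather than only the vowel.

```python
# Sketch of the level-equating step: find the 50 ms window with the
# largest RMS amplitude, then scale the word so that window hits a
# common reference level. Illustrative only, not the study's software.

import math

def max_rms_50ms(samples, sr):
    """samples: list of floats; sr: sampling rate in Hz.
    Returns the largest RMS over any 50 ms window."""
    win = int(0.050 * sr)
    best = 0.0
    for start in range(0, len(samples) - win + 1):
        seg = samples[start:start + win]
        best = max(best, math.sqrt(sum(x * x for x in seg) / win))
    return best

def equate_level(samples, sr, reference_rms):
    """Scale the word so its max 50 ms RMS equals reference_rms."""
    peak = max_rms_50ms(samples, sr)
    gain = reference_rms / peak if peak > 0 else 1.0
    return [x * gain for x in samples]

word = [0.1] * 10 + [0.5] * 10          # toy waveform at a toy 200 Hz rate
print(max_rms_50ms(equate_level(word, 200, 0.25), 200))  # -> 0.25
```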


2016 ◽  
Vol 39 (2) ◽  
pp. 257-285 ◽  
Author(s):  
Özgür Parlak ◽  
Nicole Ziegler

Although previous research has demonstrated the efficacy of recasts on second language (L2) morphology and lexis (e.g., Li, 2010; Mackey & Goo, 2007), few studies have examined their effect on learners' phonological development (although see Saito, 2015; Saito & Lyster, 2012). The current study investigates the impact of recasts on the development of lexical stress, defined as the placement of emphasis on a particular syllable within a word by making it louder and longer, in oral synchronous computer-mediated communication (SCMC) and face-to-face (FTF) interaction. Using a pretest-posttest design, intermediate learners of English were randomly assigned to one of four groups: FTF recast, SCMC recast, FTF control, or SCMC control. Pre- and posttests consisted of sentence-reading and information-exchange tasks, while the treatment was an interactive role-play task. Syllable duration, intensity, and pitch were used to analyze learners' development of stress placement. The statistical analyses of the acoustic correlates did not yield significant differences. However, the observed patterns suggest a need for further investigation to understand the relationship between recasts and the development of lexical stress.
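The acoustic analysis described here rests on comparing syllables on three correlates of stress: duration, intensity, and pitch. The following Python sketch shows one simple way to decide which syllable carries the prominence; the "majority of cues" rule, field names, and numbers are assumptions for illustration, not the authors' statistical analysis.

```python
# Sketch: given per-syllable duration, intensity, and pitch measurements,
# decide which syllable carries the acoustic prominence. The simple
# majority-of-cues rule is an illustrative assumption.

def stressed_syllable(syllables):
    """syllables: list of dicts with 'dur_ms', 'intensity_db', 'f0_hz'.
    Returns the index of the syllable that wins on most of the three cues."""
    cues = ["dur_ms", "intensity_db", "f0_hz"]
    votes = [0] * len(syllables)
    for cue in cues:
        winner = max(range(len(syllables)), key=lambda i: syllables[i][cue])
        votes[winner] += 1
    return max(range(len(syllables)), key=lambda i: votes[i])

# Toy measurements for a two-syllable word stressed on the first syllable.
word = [{"dur_ms": 180, "intensity_db": 72, "f0_hz": 210},
        {"dur_ms": 140, "intensity_db": 66, "f0_hz": 180}]
print(stressed_syllable(word))  # -> 0
```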


2018 ◽  
Vol 61 (6) ◽  
pp. 1409-1425 ◽  
Author(s):  
Julia L. Evans ◽  
Ronald B. Gillam ◽  
James W. Montgomery

Purpose This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children.
Method Participants included 234 children (aged 7;0–11;11 [years;months]): 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine the cognitive factors hypothesized to influence spoken word recognition: phonological working memory, updating, attention shifting, and interference inhibition.
Results Spoken word recognition at both the initial and final accept gate points did not differ between children with DLD and TD controls after controlling for target word knowledge in both groups. The two groups also did not differ on measures of updating, attention shifting, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of the speed of spoken word recognition for children in the TD group.
Conclusion Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ between children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize a target word in a stream of speech differed qualitatively for children with and without DLD.
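The propensity matching mentioned in the Method can be illustrated with a small sketch. Greedy 1:1 nearest-neighbor matching on a propensity score (estimated elsewhere from age, gender, socioeconomic status, and maternal education) is one common variant; the study's exact procedure is not specified in the abstract, so treat the Python sketch below as illustrative only.

```python
# Sketch: greedy 1:1 nearest-neighbor propensity matching. One common
# variant, assumed for illustration; the study's procedure may differ.

def greedy_match(dld_scores, td_scores):
    """Each argument: dict id -> propensity score. Returns list of
    (dld_id, td_id) pairs; each TD child is used at most once."""
    available = dict(td_scores)
    pairs = []
    for d_id, d_score in sorted(dld_scores.items(), key=lambda kv: kv[1]):
        if not available:
            break
        # Pick the unused TD child with the closest propensity score.
        t_id = min(available, key=lambda t: abs(available[t] - d_score))
        pairs.append((d_id, t_id))
        del available[t_id]
    return pairs

print(greedy_match({"d1": 0.42, "d2": 0.58},
                   {"t1": 0.40, "t2": 0.61, "t3": 0.50}))
# -> [('d1', 't1'), ('d2', 't2')]
```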


2018 ◽  
Vol 10 (4) ◽  
pp. 441-470 ◽  
Author(s):  
Yanjiao Zhu ◽  
Peggy Pik Ki Mok

Abstract Previous studies on bilingual visual word recognition have been based mainly on European participants, while less is understood about Asian populations. In this study, the recognition of German-English cognates and interlingual homographs in lexical decision tasks was examined in the two non-native languages of Cantonese-English-German trilinguals. In the L2 English task, cognates were responded to faster and more accurately than their matched non-cognates, while in the equivalent L3 German task, no cognate facilitation effect was found. However, cognate facilitation effects on response time and accuracy were observed in another L3 German task that included both cognates and interlingual homographs. The study suggests that Asian trilinguals access the L2 and L3 in a language non-selective manner, despite their low proficiency in the recently acquired L3. Meanwhile, lexical processing in a non-proficient L3 is to a great extent affected by multiple contextual factors.
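A cognate facilitation effect of the kind reported here is typically quantified as the mean correct-trial response time difference between non-cognates and cognates. The Python sketch below shows that computation under an assumed trial-data layout; it is an illustration, not the authors' analysis.

```python
# Sketch: quantifying a cognate facilitation effect as the mean
# correct-trial RT difference. Data layout is an assumption.

from statistics import mean

def facilitation_ms(trials):
    """trials: list of dicts with 'type' ('cognate'/'noncognate'),
    'rt_ms', and 'correct'. Positive value = cognates were faster."""
    rt = {t: mean(tr["rt_ms"] for tr in trials
                  if tr["type"] == t and tr["correct"])
          for t in ("cognate", "noncognate")}
    return rt["noncognate"] - rt["cognate"]

sample = [{"type": "cognate", "rt_ms": 620, "correct": True},
          {"type": "cognate", "rt_ms": 655, "correct": True},
          {"type": "noncognate", "rt_ms": 700, "correct": True},
          {"type": "noncognate", "rt_ms": 684, "correct": False}]
print(facilitation_ms(sample))  # -> 62.5 (incorrect trials excluded)
```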


Author(s):  
Christina Blomquist ◽  
Rochelle S. Newman ◽  
Yi Ting Huang ◽  
Jan Edwards

Purpose Children with cochlear implants (CIs) are more likely to struggle with spoken language than their age-matched peers with normal hearing (NH), and new language processing literature suggests that these challenges may be linked to delays in spoken word recognition. The purpose of this study was to investigate whether children with CIs use language knowledge via semantic prediction to facilitate recognition of upcoming words and help compensate for uncertainties in the acoustic signal.
Method Five- to 10-year-old children with CIs heard sentences with an informative verb (draws) or a neutral verb (gets) preceding a target word (picture). The target referent was presented on a screen, along with a phonologically similar competitor (pickle). Children's eye gaze was recorded to quantify efficiency of access of the target word and suppression of phonological competition. Performance was compared to both an age-matched group and a vocabulary-matched group of children with NH.
Results Children with CIs, like their peers with NH, demonstrated use of informative verbs to look more quickly to the target word and look less to the phonological competitor. However, children with CIs demonstrated less efficient use of semantic cues relative to their peers with NH, even when matched for vocabulary ability.
Conclusions Children with CIs use semantic prediction to facilitate spoken word recognition but do so to a lesser extent than children with NH. Children with CIs experience challenges in predictive spoken language processing above and beyond limitations from delayed vocabulary development. Children with CIs with better vocabulary ability demonstrate more efficient use of lexical-semantic cues. Clinical interventions focusing on building knowledge of words and their associations may support efficiency of spoken language processing for children with CIs. Supplemental Material https://doi.org/10.23641/asha.14417627
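The eye-gaze results can be read as a target-versus-competitor looking measure within an analysis window. Below is a minimal Python sketch of such a measure; the sample format, window, and area-of-interest labels are assumptions for illustration, not the authors' actual analysis.

```python
# Sketch: proportion of looks to the target versus the phonological
# competitor within an analysis window. All names are illustrative.

def target_advantage(samples, start_ms, end_ms):
    """samples: list of (time_ms, aoi) with aoi in
    {'target', 'competitor', 'other'}. Returns proportion of looks to
    target minus proportion to competitor within the window."""
    window = [aoi for t, aoi in samples if start_ms <= t < end_ms]
    if not window:
        return 0.0
    p_target = window.count("target") / len(window)
    p_comp = window.count("competitor") / len(window)
    return p_target - p_comp

# Toy gaze record: child settles on the target after a competitor look.
gaze = [(300, "other"), (400, "competitor"), (500, "target"),
        (600, "target"), (700, "target")]
print(round(target_advantage(gaze, 200, 800), 2))  # -> 0.4
```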


2021 ◽  
pp. 174702182110645
Author(s):  
Fengjiao Cong ◽  
Baoguo Chen

We conducted three eye movement experiments to investigate the mechanism for coding letter positions in a person's second language during sentence reading; we also examined the role of morphology in this process with a more rigorous manipulation. Given that readers obtain information not only from currently fixated words (i.e., the foveal area) but also from upcoming words (i.e., the parafoveal area) to guide their reading, we examined both when the targets were fixated (Exp. 1) and when the targets were seen parafoveally (Exp. 2 and Exp. 3). First, we found the classic transposed-letter (TL) effect in Exp. 1, but not in Exp. 2 or Exp. 3. This implies that flexible letter position coding exists during sentence reading, but it is limited to words located in the foveal area, suggesting that L2 readers whose proficiency is lower than that of skilled native readers are not able to extract and use the parafoveal letter identity and position information of a word, whether the word is long (Exp. 2) or short (Exp. 3). Second, we found that morphological information influenced the magnitude of the TL effect in Exp. 1. These results provide new eye movement evidence for the flexibility of L2 letter position coding during sentence reading, as well as for the interactions between the different internal representations of words in this process. Altogether, this is helpful for understanding L2 sentence reading and visual word recognition. Thus, future L2 reading frameworks should integrate word recognition and eye movement control models.
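The transposed-letter (TL) manipulation at the heart of these experiments swaps two adjacent internal letters of a word, and the morphology question turns on whether the swap falls within a morpheme or straddles a morpheme boundary. A toy Python generator follows; the example word and positions are illustrative, not the authors' stimulus set.

```python
# Sketch: constructing transposed-letter (TL) stimuli by swapping two
# adjacent internal letters, within or across a morpheme boundary.

def transpose(word, i):
    """Swap letters i and i+1 (0-indexed, internal positions only)."""
    assert 1 <= i < len(word) - 2, "keep first and last letters intact"
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

# 'walker' = walk + er; positions 3/4 straddle the morpheme boundary.
print(transpose("walker", 2))  # within-morpheme TL   -> 'wakler'
print(transpose("walker", 3))  # boundary-crossing TL -> 'walekr'
```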


2019 ◽  
Vol 9 (18) ◽  
pp. 3870 ◽  
Author(s):  
Helen L. Bear ◽  
Richard Harvey

Lipreading is understanding speech from observed lip movements. An observed series of lip motions is an ordered sequence of visual lip gestures. These gestures are commonly known, but as yet not formally defined, as 'visemes'. In this article, we describe a structured approach which allows us to create speaker-dependent visemes with a fixed number of visemes within each set. We create sets of visemes for set sizes from 2 to 45. Each set of visemes is based upon clustering phonemes, so each set has a unique phoneme-to-viseme mapping. We first present an experiment using these maps and the Resource Management Audio-Visual (RMAV) dataset which shows the effect of changing the viseme set size in speaker-dependent machine lipreading and demonstrates that word recognition with phoneme classifiers is possible. Furthermore, we show that there are intermediate units between visemes and phonemes which are better still. Second, we present a novel two-pass training scheme for phoneme classifiers: in the first pass, we train classifiers on the new intermediate visual units from our first experiment; in the second pass, we use the phoneme-to-viseme maps to retrain these as phoneme classifiers. This method significantly improves on previous lipreading results with RMAV speakers.
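A phoneme-to-viseme map of the kind described, where clustering phonemes yields a many-to-one mapping onto visual classes, is simple to represent. The toy Python sketch below shows the data structure and its use; the specific clusters are illustrative assumptions, not one of the paper's 2-to-45 maps.

```python
# Sketch: a many-to-one phoneme-to-viseme map. The clusters below are
# illustrative assumptions (bilabials, labiodentals, open vowels), not
# one of the paper's actual maps.

P2V = {"p": "V1", "b": "V1", "m": "V1",   # bilabial closures look alike
       "f": "V2", "v": "V2",              # labiodental gestures
       "aa": "V3", "ay": "V3"}            # open-jaw vowels

def to_visemes(phonemes):
    """Map a phoneme sequence to its viseme sequence; a map with N
    viseme classes keeps N distinct visual units, so larger N = finer
    visual distinctions."""
    return [P2V.get(p, "V?") for p in phonemes]

print(to_visemes(["b", "aa", "m", "p"]))  # -> ['V1', 'V3', 'V1', 'V1']
```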

