When too many vowels impede language processing: An eye-tracking study of Danish-learning children

2019 ◽  
Author(s):  
Fabio Trecca ◽  
Dorthe Bleses ◽  
Anders Højen ◽  
Thomas O. Madsen ◽  
Morten H. Christiansen

Research has suggested that Danish-learning children lag behind in early language acquisition. The phenomenon has been attributed to the opaque phonetic structure of Danish, which features an unusually large number of non-consonantal sounds (i.e., vowels and semivowels/glides). The large number of vocalic sounds in speech is thought to provide fewer cues to word segmentation and to make language processing harder, thus hindering the acquisition process. In this study, we explored whether the presence of vocalic sounds at word boundaries impedes real-time speech processing in 24-month-old Danish-learning children, compared to word boundaries that are marked by consonantal sounds. Using eye-tracking, we tested children’s real-time comprehension of known consonant-initial and vowel-initial words when presented in either a consonant-final carrier phrase or in a vowel-final carrier phrase, thus resulting in the four boundary types C#C, C#V, V#C, and V#V. Our results showed that the presence of vocalic sounds around a word boundary—especially before—impedes processing of Danish child-directed sentences.
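The four boundary types can be illustrated with a small sketch. This is not the authors' materials: it classifies a boundary orthographically (the study worked with phonetic structure, not spelling), uses a simplified set of Danish vowel letters, and the example words are hypothetical.

```python
# Simplified Danish vowel letters; glides and phonetic detail are omitted.
VOWELS = set("aeiouyæøå")

def boundary_type(carrier: str, word: str) -> str:
    """Classify the boundary between a carrier phrase and a target word
    as C#C, C#V, V#C, or V#V, based on the carrier-final and
    word-initial letters."""
    left = "V" if carrier[-1].lower() in VOWELS else "C"
    right = "V" if word[0].lower() in VOWELS else "C"
    return f"{left}#{right}"

print(boundary_type("find", "bilen"))  # C#C: consonant-final carrier, consonant-initial word
print(boundary_type("se", "anden"))   # V#V: vowel-final carrier, vowel-initial word
```

Crossing the two carrier types with the two word types yields exactly the four conditions named in the abstract.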

2020 ◽  
Vol 63 (4) ◽  
pp. 898-918
Author(s):  
Fabio Trecca ◽  
Dorthe Bleses ◽  
Anders Højen ◽  
Thomas O Madsen ◽  
Morten H Christiansen

Research has suggested that Danish-learning children lag behind in early language acquisition. The phenomenon has been attributed to the opaque phonetic structure of Danish, which features an unusually large number of non-consonantal sounds (i.e., vowels and semivowels/glides). The large number of vocalic sounds in speech is thought to provide fewer cues to word segmentation and to make language processing harder, thus hindering the acquisition process. In this study, we explored whether the presence of vocalic sounds at word boundaries impedes real-time speech processing in 24-month-old Danish-learning children, compared to word boundaries that are marked by consonantal sounds. Using eye-tracking, we tested children’s real-time comprehension of known consonant-initial and vowel-initial words when presented in either a consonant-final carrier phrase or in a vowel-final carrier phrase, thus resulting in the four boundary types C#C, C#V, V#C, and V#V. Our results showed that the presence of vocalic sounds around a word boundary—especially before—impedes processing of Danish child-directed sentences.


2013 ◽  
Vol 35 (2) ◽  
pp. 213-235 ◽  
Author(s):  
Leah Roberts ◽  
Anna Siyanova-Chanturia

Second language (L2) researchers are becoming more interested in both L2 learners’ knowledge of the target language and how that knowledge is put to use during real-time language processing. Researchers are therefore beginning to see the importance of combining traditional L2 research methods with those that capture the moment-by-moment interpretation of the target language, such as eye-tracking. The major benefit of the eye-tracking method is that it can tap into real-time (or online) comprehension processes during the uninterrupted processing of the input, and thus, the data can be compared to those elicited by other, more metalinguistic tasks to offer a broader picture of language acquisition and processing. In this article, we present an overview of the eye-tracking technique and illustrate the method with L2 studies that show how eye-tracking data can be used to (a) investigate language-related topics and (b) inform key debates in the fields of L2 acquisition and L2 processing.


2018 ◽  
Vol 49 (08) ◽  
pp. 1335-1345 ◽  
Author(s):  
Hugh Rabagliati ◽  
Nathaniel Delaney-Busch ◽  
Jesse Snedeker ◽  
Gina Kuperberg

Background: People with schizophrenia process language in unusual ways, but the causes of these abnormalities are unclear. In particular, it has proven difficult to empirically disentangle explanations based on impairments in the top-down processing of higher level information from those based on the bottom-up processing of lower level information.
Methods: To distinguish these accounts, we used visual-world eye tracking, a paradigm that measures spoken language processing during real-world interactions. Participants listened to and then acted out syntactically ambiguous spoken instructions (e.g. ‘tickle the frog with the feather’, which could either specify how to tickle a frog, or which frog to tickle). We contrasted how 24 people with schizophrenia and 24 demographically matched controls used two types of lower level information (prosody and lexical representations) and two types of higher level information (pragmatic and discourse-level representations) to resolve the ambiguous meanings of these instructions. Eye tracking allowed us to assess how participants arrived at their interpretation in real time, while recordings of participants’ actions measured how they ultimately interpreted the instructions.
Results: We found a striking dissociation in participants’ eye movements: the two groups were similarly adept at using lower level information to immediately constrain their interpretations of the instructions, but only controls showed evidence of fast top-down use of higher level information. People with schizophrenia, nonetheless, did eventually reach the same interpretations as controls.
Conclusions: These data suggest that language abnormalities in schizophrenia partially result from a failure to use higher level information in a top-down fashion, to constrain the interpretation of language as it unfolds in real time.


2021 ◽  
Vol 12 ◽  
Author(s):  
Elif Canseza Kaplan ◽  
Anita E. Wagner ◽  
Paolo Toffanin ◽  
Deniz Başkent

Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception (a sentence recall task), which reveals a post-task response, and online measures of real-time spoken language processing: gaze-tracking and pupillometry. We used stimuli of comparable complexity across both paradigms and tested the same groups of participants. In the sentence recall task, musicians recalled more words correctly than non-musicians. In the eye-tracking experiment, both groups showed reduced fixations to the target and competitor words’ images as the level of the speech maskers increased. The time course of gaze fixations to the competitor did not differ between groups in the speech-in-quiet condition, but it did differ between groups once the two-talker masker was added to the target signal. As the level of the two-talker masker increased, musicians showed reduced lexical competition, as indicated by gaze fixations to the competitor. The pupil dilation data showed differences mainly at one target-to-masker ratio, which does not allow us to draw conclusions regarding potential differences in the use of cognitive resources between groups. Overall, the eye-tracking measure enabled us to observe that musicians may be using a different strategy than non-musicians to attain spoken word recognition as the noise level increased. However, further investigation with more fine-grained alignment between the processes captured by online and offline measures is necessary to establish whether musicians differ due to better cognitive control or better sound processing.


Author(s):  
Parham Shahidi ◽  
Steve C. Southward ◽  
Mehdi Ahmadian

A novel real-time algorithm has been developed for estimating temporal word boundaries in measured speech without the need for interpretation of individual words. This algorithm is the foundational building block of a method for estimating a variety of key metrics, such as word production rate, phrase production rate, and words per phrase, that are indicative of human mental states. In particular, we are interested in developing a system for monitoring locomotive crew alertness. The majority of existing speech processing algorithms rely on pre-recorded speech corpora. The real-time algorithm presented here is unique in that it employs a simple and efficient pattern matching method to identify temporal word boundaries by monitoring threshold crossings in the speech power signal. This algorithm eliminates the need to interpret the speech, yet still produces reasonable estimates of word boundaries. The proposed algorithm has been tested with a batch of experimentally recorded speech data and with real-time speech data. The results from the testing are outlined in this paper.
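The threshold-crossing idea can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the frame-power values, the threshold, and the minimum-gap parameter are all assumptions made for the example.

```python
def estimate_boundaries(power, threshold, min_gap=3):
    """Estimate word-onset indices from a short-time speech power signal.

    A boundary is reported where the power rises back above `threshold`
    after staying below it for at least `min_gap` consecutive frames,
    i.e. after an inter-word pause. No speech content is interpreted.
    """
    boundaries = []
    below = 0  # consecutive frames below threshold so far
    for i, p in enumerate(power):
        if p < threshold:
            below += 1
        else:
            if below >= min_gap:
                boundaries.append(i)  # word onset following a pause
            below = 0
    return boundaries

# Toy power signal: two pauses (runs of zeros) separate three "words".
power = [5, 6, 5, 0, 0, 0, 4, 5, 0, 0, 0, 6, 7]
print(estimate_boundaries(power, threshold=1))  # [6, 11]
```

In a streaming setting the same loop can run frame by frame as power estimates arrive, which is what makes a threshold-crossing detector suitable for real-time use.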


2002 ◽  
Vol 24 (2) ◽  
pp. 249-260 ◽  
Author(s):  
Susan M. Gass ◽  
Alison Mackey

In this response to Ellis's target article on frequency in language processing, language use, and language acquisition, we argue in favor of a role for frequency in several areas of second language acquisition, including interactional input and output and speech processing. We also discuss areas where second language acquisition appears to proceed along its own route and at its own pace regardless of the frequency of the input, as well as areas where input is infrequent but acquisition appears to be unimpeded. Our response is intended to highlight the complexity of the task of deciphering the role and importance of frequency.


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249309
Author(s):  
Theresa Redl ◽  
Stefan L. Frank ◽  
Peter de Swart ◽  
Helen de Hoop

Two experiments tested whether the Dutch possessive pronoun zijn ‘his’ gives rise to a gender inference and thus causes a male bias when used generically in sentences such as Everyone was putting on his shoes. Experiment 1 (N = 120, 48 male) was a conceptual replication of a previous eye-tracking study that had not found evidence of a male bias. The results of the current eye-tracking experiment showed that the generically-intended masculine pronoun triggered a gender inference and caused a male bias, but only for male participants and only in stereotypically neutral contexts. No evidence of a male bias was found in stereotypically female or male contexts, nor for female participants in any context. Experiment 2 (N = 80, 40 male) used the same stimuli as Experiment 1 but employed the sentence evaluation paradigm. No evidence of a male bias was found in Experiment 2. Taken together, the results suggest that the generically-intended masculine pronoun zijn ‘his’ can cause a male bias for male participants even when the referents are previously introduced by the inclusive and grammatically gender-unmarked iedereen ‘everyone’. This male bias surfaces with eye-tracking, which taps directly into early language processing, but not in offline sentence evaluations. Furthermore, the results suggest that the intended generic reading of the masculine possessive pronoun zijn ‘his’ is more readily available to women than to men.

