Naturalistic and Elicited Data in Grammatical Studies of Codeswitching

Author(s): Jeff MacSwan, Kara T. McAlister

Abstract The authors discuss the merits of naturalistic and elicited data in the study of grammatical aspects of codeswitching. Three limitations of naturalistic data are discussed: the problems of negative evidence, induction, and unidentified performance error. The authors recommend the use of language surveys as a tool for overcoming limitations of elicited grammaticality judgment data.

2022, Vol 1, pp. 1
Author(s): Ben Ambridge, Laura Doherty, Ramya Maitreyee, Tomoko Tatsumi, Shira Zicherman, et al.

How do language learners avoid producing verb argument structure overgeneralization errors (*The clown laughed the man, cf. The clown made the man laugh), while retaining the ability to apply such generalizations productively when appropriate? This question has long been seen as both central to acquisition research and particularly challenging. Focussing on causative overgeneralization errors of this type, a previous study reported a computational model that learns, on the basis of corpus data and human-derived verb-semantic-feature ratings, to predict adults’ by-verb preferences for less- versus more-transparent causative forms (e.g., *The clown laughed the man vs. The clown made the man laugh) across English, Hebrew, Hindi, Japanese and K’iche Mayan. Here, we tested the ability of this model (and an expanded version with multiple hidden layers) to explain binary grammaticality judgment data from children aged 4;0-5;0, and elicited-production data from children aged 4;0-5;0 and 5;6-6;6 (N=48 per language). In general, the model successfully simulated both children’s judgment and production data, with correlations of r=0.5-0.6 and r=0.75-0.85, respectively, and also generalized to unseen verbs. Importantly, learners of all five languages showed some evidence of making the types of overgeneralization errors – in both judgments and production – previously observed in naturalistic studies of English (e.g., *I’m dancing it). Together with previous findings, the present study demonstrates that a simple learning model can explain (a) adults’ continuous judgment data, (b) children’s binary judgment data and (c) children’s production data (with no training on these datasets), and therefore constitutes a plausible mechanistic account of the acquisition of verbs’ argument structure restrictions.
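
As a concrete illustration of the modelling setup described above, the following minimal Python sketch fits by-verb preferences from verb-semantic-feature ratings and then correlates the predictions with binary child judgments. Everything here is a hedged assumption for exposition: the data are synthetic, the dimensions are arbitrary, and a plain ridge regression stands in for the published model.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)

    # Hypothetical inputs: one row per verb, columns are human-derived
    # semantic-feature ratings (stand-ins for the published feature set).
    n_verbs, n_features = 60, 10
    X = rng.normal(size=(n_verbs, n_features))

    # Hypothetical targets: adults' by-verb preferences for the transparent
    # causative (e.g., *The clown laughed the man) over the periphrastic one.
    y_adult = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_verbs)

    # Fit a simple linear map (ridge regression) from features to preferences.
    lam = 1.0
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y_adult)
    pred = X @ w

    # Score the fit against hypothetical children's binary (0/1) judgments,
    # analogous to the r=0.5-0.6 judgment correlations reported above.
    y_child = ((y_adult + rng.normal(size=n_verbs)) > 0).astype(float)
    r, p = pearsonr(pred, y_child)
    print(f"model vs. child judgments: r = {r:.2f} (p = {p:.3g})")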


Author(s): Yan Li, Lei Yan

Abstract This study investigates the effects of explicit negative evidence in teaching on students’ perception of two types of ungrammatical Chinese sentences in which -le should not be used. Two groups of advanced learners of Chinese were pre-tested immediately before receiving instruction that included explicit negative evidence about the use of -le, and post-tested twice: once directly after the completion of the instruction, and again four weeks later, using a grammaticality judgment test. The results of the grammaticality judgment test indicated that including explicit negative evidence in teaching helps advanced English-speaking learners of Chinese identify sentences in which -le is used incorrectly. The implication is that including negative evidence in teaching can reduce errors caused by negative transfer from a student’s native language.


2021, Vol 1, pp. 1
Author(s): Ben Ambridge, Laura Doherty, Ramya Maitreyee, Tomoko Tatsumi, Shira Zicherman, et al.

How do language learners avoid producing verb argument structure overgeneralization errors (*The clown laughed the man, cf. The clown made the man laugh), while retaining the ability to apply such generalizations productively when appropriate? This question has long been seen as both central to acquisition research and particularly challenging. Focussing on causative overgeneralization errors of this type, a previous study reported a computational model that learns, on the basis of corpus data and human-derived verb-semantic-feature ratings, to predict adults’ by-verb preferences for less- versus more-transparent causative forms (e.g., *The clown laughed the man vs. The clown made the man laugh) across English, Hebrew, Hindi, Japanese and K’iche Mayan. Here, we tested the ability of this model to explain binary grammaticality judgment data from children aged 4;0-5;0, and elicited-production data from children aged 4;0-5;0 and 5;6-6;6 (N=48 per language). In general, the model successfully simulated both children’s judgment and production data, with correlations of r=0.5-0.6 and r=0.75-0.85, respectively, and also generalized to unseen verbs. Importantly, learners of all five languages showed some evidence of making the types of overgeneralization errors – in both judgments and production – previously observed in naturalistic studies of English (e.g., *I’m dancing it). Together with previous findings, the present study demonstrates that a simple discriminative learning model can explain (a) adults’ continuous judgment data, (b) children’s binary judgment data and (c) children’s production data (with no training on these datasets), and therefore constitutes a plausible mechanistic account of the retreat from overgeneralization.
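
The “generalized to unseen verbs” claim can likewise be illustrated with a held-out evaluation: train on most verbs, then correlate predictions with preferences for verbs the model never saw. This is a sketch under the same hypothetical assumptions as above (synthetic features and preferences, a simple linear fit), not the authors’ discriminative model.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)

    # Synthetic verb features and by-verb preference scores, as before.
    n_verbs, n_features = 60, 10
    X = rng.normal(size=(n_verbs, n_features))
    y = X @ rng.normal(size=n_features) + rng.normal(scale=0.5, size=n_verbs)

    # Hold out 20% of verbs as "unseen"; fit on the remainder only.
    idx = rng.permutation(n_verbs)
    test, train = idx[:12], idx[12:]
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

    # Correlate held-out predictions with held-out preferences.
    r, _ = pearsonr(X[test] @ w, y[test])
    print(f"held-out verbs: r = {r:.2f}")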


1998
Author(s): Craig R. M. McKenzie, Susanna M. Lee, Karen K. Chen

2003, Vol 141-142, pp. 301-344
Author(s): Teresa Pica, Gay N. Washburn

This study sought to identify and describe how negative evidence was made available and accessible in responses to learners during two classroom activities: a teacher-led discussion, which emphasized communication of subject matter content, and a teacher-led sentence construction exercise, which focused on application of grammatical rules. Data came from adult, pre-academic English language learners during six discussions of American film and literature, and six sets of sentence construction exercises. Findings revealed little availability of negative evidence in the discussions, as students' fluent, multi-error contributions drew responses that were primarily back-channels and continuation moves. Greater availability and accessibility of negative evidence were found in the sentence construction exercises, as students were given feedback following their completion of individual sentences. Results from the study suggested several pedagogical implications and applications.


Complacency potential is an important measure for predicting performance errors, such as failing to detect a system failure. This study updates and expands Singh, Molloy, and Parasuraman’s 1993 Complacency-Potential Rating Scale (CPRS), revising the questions to cover technology commonly used today and adding items on how frequently that technology is used. Our goals were to update the scale, analyze it for factor shifts and internal consistency, and explore correlations between individual factor scores and the frequency-of-use questions. We hypothesized that the factor structure would not shift from the original CPRS’s four subscales. We found instead that the revised CPRS consisted of only three subscales, with the following Cronbach’s alpha values: Confidence, 0.599; Safety/Reliability, 0.534; and Trust, 0.201. Correlations between the subscales, the revised complacency-potential score, and the frequency-of-use questions are also discussed.
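
The internal-consistency values reported above are Cronbach’s alpha: for k items, alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). The following minimal Python sketch computes it on hypothetical Likert-scale responses (assumed data, not the study’s):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x items matrix of Likert ratings."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)   # per-item sample variances
        total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    # Hypothetical 5-point ratings: 100 respondents, a 4-item subscale whose
    # items share a latent factor (so alpha should come out moderately high).
    rng = np.random.default_rng(2)
    latent = rng.normal(size=(100, 1))
    ratings = np.clip(np.round(3 + latent + rng.normal(size=(100, 4))), 1, 5)
    print(f"alpha = {cronbach_alpha(ratings):.3f}")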

