artificial language learning
Recently Published Documents


TOTAL DOCUMENTS: 94 (FIVE YEARS: 37)

H-INDEX: 15 (FIVE YEARS: 3)

Author(s): Mora Maldonado, Jennifer Culbertson

Abstract: Languages vary with respect to whether sentences with two negative elements give rise to double negation or negative concord meanings. We explore an influential hypothesis about what governs this variation: namely, that whether a language exhibits double negation or negative concord is partly determined by the phonological and syntactic nature of its negative marker (Zeijlstra 2004; Jespersen 1917). For example, one version of this hypothesis argues that languages with affixal negation must be negative concord (Zeijlstra 2008). We use an artificial language learning experiment to investigate whether English speakers are sensitive to the status of the negative marker when learning double negation and negative concord languages. Our findings fail to provide evidence supporting this hypothesised connection. Instead, our results suggest that learners find it easier to learn negative concord languages compared to double negation languages independently of whether the negative marker is an adverb or an affix. This is in line with evidence from natural language acquisition (Thornton et al. 2016).


2021
Author(s): Chris Foster, Chad C. Williams, Olave E. Krigolson, Alona Fyshe

2021
Author(s): Daoxin Li, Kathryn Schuler

Languages differ regarding the depth, structure, and syntactic domains of recursive structures. Even within a single language, some structures allow infinite self-embedding while others are more restricted. For example, English allows infinite free embedding of the prenominal genitive -s, whereas the postnominal genitive of is largely restricted to only one level and to a limited set of items. Therefore, while the capacity for recursion is considered a crucial part of the language faculty, speakers need to learn from experience which specific structures allow free embedding and which do not. One effort to account for the mechanism that underlies this learning process, the distributional learning proposal, suggests that the recursion of a structure (e.g. X1’s-X2) is licensed if the X1 position and the X2 position are productively substitutable in the input. A series of corpus studies have confirmed the availability of such distributional cues in child directed speech. The present study further tests the distributional learning proposal with an artificial language learning experiment. We found that, as predicted, participants exposed to productive input were more likely to accept unattested strings at both one- and two-embedding levels than participants exposed to unproductive input. Therefore, our results suggest that speakers can indeed use distributional information at one level to learn whether or not a structure is freely recursive.
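The licensing condition described above can be made concrete with a small sketch. Here, substitutability between the X1 and X2 slots of "X1's X2" is scored as set overlap (Jaccard) between the nouns attested in each slot; the noun sets and the 0.5 threshold are invented for illustration and are not the study's materials or metric.

```python
def substitutable(pairs, threshold=0.5):
    """Toy productivity check for the distributional learning proposal:
    the X1 and X2 slots of "X1's X2" count as productively substitutable
    if the nouns attested in one slot largely also appear in the other.
    Overlap is measured as Jaccard similarity; the threshold is an
    illustrative assumption, not a value from the study."""
    x1 = {a for a, _ in pairs}  # nouns seen in possessor (X1) position
    x2 = {b for _, b in pairs}  # nouns seen in possessee (X2) position
    overlap = len(x1 & x2) / len(x1 | x2)
    return overlap >= threshold

# Productive input: the same nouns occur in both slots -> recursion licensed.
productive = [("mom", "friend"), ("friend", "mom"),
              ("teacher", "friend"), ("mom", "teacher")]
# Unproductive input: the two slots draw on disjoint noun sets.
unproductive = [("mom", "hat"), ("teacher", "hat"), ("friend", "shoe")]

print(substitutable(productive), substitutable(unproductive))  # True False
```

A learner applying this check to the productive input would generalise to unattested multi-level embeddings (mom's friend's teacher), while the unproductive input gives no warrant for such extension.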


2021
Author(s): R. Thomas McCoy, Jennifer Culbertson, Paul Smolensky, Géraldine Legendre

Human language is often assumed to make "infinite use of finite means", that is, to generate an infinite number of possible utterances from a finite number of building blocks. From an acquisition perspective, this assumed property of language is interesting because learners must acquire their languages from a finite number of examples. To acquire an infinite language, learners must therefore generalize beyond the finite bounds of the linguistic data they have observed. In this work, we use an artificial language learning experiment to investigate whether people generalize in this way. We train participants on sequences from a simple grammar featuring center embedding, where the training sequences have at most two levels of embedding, and then evaluate whether participants accept sequences of a greater depth of embedding. We find that, when participants learn the pattern for sequences of the sizes they have observed, they also extrapolate it to sequences with a greater depth of embedding. These results support the hypothesis that the learning biases of humans favor languages with an infinite generative capacity.


Author(s): Youngah Do, Jonathan Havenhill

The role of inductive biases has been actively examined in work on phonological learning. While previous studies have systematically supported a structural bias hypothesis, i.e., patterns with simpler phonological featural descriptions are easier to learn, the results have been mixed for a substantive bias hypothesis, i.e., phonetically motivated patterns are easier to learn. This study explores an explanation for the uncertain status of substantive bias in phonological learning. Among the aspects of phonetic substance, we focus on articulatory factors. We hypothesize that practice producing phonological patterns makes salient to learners the articulatory factors underlying articulatorily (un-)grounded patterns. An artificial language learning experiment was conducted to test the learning of postnasal (de)voicing, a pattern primarily grounded in articulatory factors. We examine the role of production in the learning of articulatorily grounded (postnasal voicing) vs. ungrounded (postnasal devoicing) patterns by comparing the outcomes of perception-only vs. perception-with-production learning contexts, in both categorical and variable pattern learning conditions. The results show evidence for a production effect, but the effect was restricted to certain contexts, namely those involving a higher level of uncertainty and languages exhibiting dominant natural patterns. We discuss the implications of our findings for phonological learning and language change.


2021
Author(s): Giulia Bovolenta, Emma Marsden

Prediction error is known to enhance priming effects for familiar syntactic structures; it also strengthens the formation of new declarative memories. Here, we investigate whether violating expectations may also aid the acquisition of new abstract syntactic structures, by enhancing memory for individual instances which can then form the basis for abstraction. In a cross-situational artificial language learning paradigm, participants were exposed to novel syntactic structures in ways that either violated their expectations (Surprisal group) or conformed to them (Control group). In a delayed post-test, participants' structural knowledge was assessed both by structure test trials (cross-situational learning trials focusing on the active/passive distinction, with both familiar and novel verbs) and by a grammaticality judgment task. Participants in the Surprisal group were significantly more accurate than Control participants on the structure test trials using novel verbs and in the grammaticality judgment task. This suggests they had developed stronger abstract structural knowledge and were better at generalising it to novel instances, even though they were not significantly more likely to become aware of the functional distinction between the two structures.


Author(s): Merel Muylle, Sarah Bernolet, Robert J. Hartsuiker

Abstract: We investigated L1 and L2 frequency effects in the sharing of syntax across languages (reflected in cross-linguistic structural priming) using an artificial language (AL) paradigm. Ninety-six Dutch speakers learned an AL with either a prepositional-object (PO) dative bias (PO datives appeared three times as often as double-object [DO] datives) or a DO dative bias (DOs appeared three times as often as POs). Priming was assessed from the AL to Dutch (a strongly PO-biased language). There was weak immediate priming for DOs, but not for POs, in both bias conditions. This suggests that L1, but not AL, frequency influenced immediate priming. Furthermore, the DO bias group produced 10% more DOs in Dutch than the PO bias group, showing that cumulative priming was influenced by AL frequency. We discuss the different effects of L1 and AL frequency on cross-linguistic structural priming in terms of lexicalist and implicit learning accounts.


2021
Vol 6 (1), pp. 92
Author(s): Sara Finley

The representations of transparent vowels in vowel harmony have been of interest to phonologists because of the challenges they pose for constraints on locality and complexity. One proposal is that transparent vowels in back vowel harmony may be intermediate between front and back. The present study uses two artificial language learning experiments to explore the psychological reality of acoustic differences in transparent vowels in back vs. front vowel contexts. Participants were exposed to a back/round vowel harmony language with a neutral vowel that was spliced so that the F2 was lower in back vowel contexts and higher in front vowel contexts (the Natural condition) or the reverse (the Unnatural condition). In Experiment 1, only participants in the Natural condition learned the behavior of the transparent vowel relative to a No-Training control, but there was no difference between the Natural and Unnatural conditions. In Experiment 2, only participants in the Natural condition learned the vowel harmony pattern, though again there were no significant differences between the two conditions, and neither condition successfully learned the behavior of the transparent vowel. These results suggest that the effects of small differences in the F2 value of transparent back vowels on learnability are minimal.


2021
Vol 12
Author(s): Theresa Matzinger, Nikolaus Ritt, W. Tecumseh Fitch

A prerequisite for spoken language learning is segmenting continuous speech into words. Amongst many possible cues to identify word boundaries, listeners can use both transitional probabilities between syllables and various prosodic cues. However, the relative importance of these cues remains unclear, and previous experiments have not directly compared the effects of contrasting multiple prosodic cues. We used artificial language learning experiments, where native German-speaking participants extracted meaningless trisyllabic “words” from a continuous speech stream, to evaluate these factors. We compared a baseline condition (statistical cues only) to five test conditions, in which word-final syllables were either (a) followed by a pause, (b) lengthened, (c) shortened, (d) changed to a lower pitch, or (e) changed to a higher pitch. To evaluate robustness and generality we used three tasks varying in difficulty. Overall, pauses and final lengthening were perceived as converging with the statistical cues and facilitated speech segmentation, with pauses helping most. Final-syllable shortening hindered baseline speech segmentation, indicating that when cues conflict, prosodic cues can override statistical cues. Surprisingly, pitch cues had little effect, suggesting that duration may be more relevant for speech segmentation than pitch in our study context. We discuss our findings with regard to the contribution to speech segmentation of language-universal boundary cues vs. language-specific stress patterns.
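The statistical-cues-only baseline can be illustrated with a small sketch of transitional-probability segmentation in the style of classic statistical-learning work: within-word syllable transitions are highly predictable, while transitions across word boundaries are not, so a learner can posit boundaries at probability dips. The syllable stream, the three toy "words", and the local-minimum heuristic below are our own illustrative choices, not the materials or analysis of the study.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP(B|A) = count(A immediately followed by B) / count(A)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

def segment(syllables, tps):
    """Posit a word boundary wherever the TP into a syllable is a local
    minimum: high within-word TPs on both sides, a dip at the boundary."""
    words, current = [], [syllables[0]]
    for i in range(1, len(syllables)):
        prev_tp = tps[(syllables[i - 1], syllables[i])]
        left = tps.get((syllables[i - 2], syllables[i - 1]), 1.0) if i >= 2 else 1.0
        right = tps.get((syllables[i], syllables[i + 1]), 1.0) if i + 1 < len(syllables) else 1.0
        if prev_tp < left and prev_tp < right:  # TP dip -> boundary
            words.append(current)
            current = [syllables[i]]
        else:
            current.append(syllables[i])
    words.append(current)
    return ["".join(w) for w in words]

# A toy continuous stream built from three trisyllabic "words".
stream = "tupiro golabu bidaku golabu tupiro bidaku tupiro golabu".split()
syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]
tps = transitional_probabilities(syllables)
print(segment(syllables, tps))
# -> ['tupiro', 'golabu', 'bidaku', 'golabu', 'tupiro', 'bidaku', 'tupiro', 'golabu']
```

In the experiments above, the prosodic manipulations (pauses, lengthening, pitch changes) either reinforce or conflict with exactly these statistical dips at word-final syllables.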


The volume deals with the multifaceted nature of morphological complexity, understood as a composite rather than unitary phenomenon, as it shows an amazing degree of crosslinguistic variation. It features an Introduction by the editors that critically discusses some of the foundational assumptions informing contemporary views on morphological complexity, eleven chapters authored by an excellent set of contributors, and a concluding chapter by Östen Dahl that reviews various approaches to morphological complexity addressed in the preceding contributions and focuses on the minimum description length approach. The central eleven chapters approach morphological complexity from different perspectives, including language-particular, crosslinguistic, and acquisitional ones, and offer insights into issues such as the quantification of morphological complexity, its syntagmatic vs. paradigmatic aspects, diachronic developments including the emergence and acquisition of complexity, and the relations between morphological complexity and socioecological parameters of language. The empirical evidence includes data from both better-known languages such as Russian, and lesser-known and underdescribed languages from Africa, Australia, and the Americas, as well as experimental data drawn from iterated artificial language learning.

