Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar

2021
Author(s): R. Thomas McCoy, Jennifer Culbertson, Paul Smolensky, Géraldine Legendre

Human language is often assumed to make "infinite use of finite means" - that is, to generate an infinite number of possible utterances from a finite number of building blocks. From an acquisition perspective, this assumed property of language is interesting because learners must acquire their languages from a finite number of examples. To acquire an infinite language, learners must therefore generalize beyond the finite bounds of the linguistic data they have observed. In this work, we use an artificial language learning experiment to investigate whether people generalize in this way. We train participants on sequences from a simple grammar featuring center embedding, where the training sequences have at most two levels of embedding, and then evaluate whether participants accept sequences of a greater depth of embedding. We find that, when participants learn the pattern for sequences of the sizes they have observed, they also extrapolate it to sequences with a greater depth of embedding. These results support the hypothesis that the learning biases of humans favor languages with an infinite generative capacity.
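
To make the structure concrete: a center-embedded sequence of depth n consists of n opening elements followed by their n matching closing elements in reverse order (A1 A2 … An Bn … B2 B1), as in nested relative clauses. Below is a minimal Python sketch of such a generator; the word pairs are made-up placeholders, not the experiment's actual stimuli.

```python
import random

# Hypothetical A/B word pairs; each A must eventually be closed by its B.
PAIRS = [("ba", "mu"), ("po", "te"), ("ki", "go")]

def center_embedded(depth):
    """Generate a center-embedded sequence of the given depth: A1 ... An Bn ... B1."""
    chosen = [random.choice(PAIRS) for _ in range(depth)]
    lefts = [a for a, _ in chosen]
    rights = [b for _, b in reversed(chosen)]  # inner pairs close first
    return " ".join(lefts + rights)

print(center_embedded(2))  # a training-sized item, e.g. "ba po te mu"
print(center_embedded(3))  # a test item deeper than anything in training
```

Training exposed participants only to depths 1 and 2; the extrapolation question is whether depth-3 outputs like the second line above are still accepted.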

Phonology, 2019, Vol. 36(4), pp. 627-653
Author(s): Brandon Prickett

This study uses an artificial language learning experiment and computational modelling to test Kiparsky's claims about Maximal Utilisation and Transparency biases in phonological acquisition. A Maximal Utilisation bias would prefer phonological patterns in which all rules are maximally utilised, and a Transparency bias would prefer patterns that are not opaque. Results from the experiment suggest that these biases affect the learnability of specific parts of a language, with Maximal Utilisation affecting the acquisition of individual rules, and Transparency affecting the acquisition of rule orderings. Two models were used to simulate the experiment: an expectation-driven Harmonic Serialism learner and a sequence-to-sequence neural network. The results from these simulations show that both models’ learning is affected by these biases, suggesting that the biases emerge from the learning process rather than any explicit structure built into the model.
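
Opacity, and with it both biases, can be illustrated with a toy derivation. The two rules in the sketch below are invented for illustration and are not the rules from Prickett's experiment. In one order, raising feeds palatalization, so both rules apply (maximally utilised) and the result is transparent; in the other order, palatalization applies too early to see the /i/ that raising later creates, yielding an opaque, counterfeeding interaction.

```python
import re

def palatalize(form):
    """Illustrative rule: t -> ch before i."""
    return re.sub(r"t(?=i)", "ch", form)

def raise_final(form):
    """Illustrative rule: word-final e -> i."""
    return re.sub(r"e$", "i", form)

def derive(underlying, rules):
    """Apply the rules to the underlying form, in order."""
    form = underlying
    for rule in rules:
        form = rule(form)
    return form

# Transparent (feeding) order: raising creates the /i/ that palatalization targets.
print(derive("te", [raise_final, palatalize]))  # -> "chi" (both rules apply)

# Opaque (counterfeeding) order: palatalization misses the /i/ created later.
print(derive("te", [palatalize, raise_final]))  # -> "ti" (palatalization under-utilised)
```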


2019, Vol. 4(2), pp. 83-107
Author(s): Carmen Saldana, Simon Kirby, Robert Truswell, Kenny Smith

Compositional hierarchical structure is a prerequisite for productive languages; it allows language learners to express and understand an infinity of meanings from finite sources (i.e., a lexicon and a grammar). Understanding how such structure evolved is central to evolutionary linguistics. Previous work combining artificial language learning and iterated learning techniques has shown how basic compositional structure can evolve from the trade-off between learnability and expressivity pressures at play in language transmission. In the present study we show, across two experiments, how the same mechanisms involved in the evolution of basic compositionality can also lead to the evolution of compositional hierarchical structure. We thus provide experimental evidence showing that cultural transmission allows advantages of compositional hierarchical structure in language learning and use to permeate language as a system of behaviour.
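
For readers unfamiliar with the paradigm, the transmission loop in iterated-learning studies can be sketched as follows. This is a schematic only: real designs, including this one, use human participants, and the learning step generalizes from observed forms rather than improvising random ones, which is what allows structure to accumulate over generations.

```python
import random

# Meanings are feature bundles (here: 3 shapes x 3 colours); signals are strings.
MEANINGS = [(shape, colour) for shape in range(3) for colour in range(3)]

def learn(training_data):
    """Memorise observed meaning-signal pairs; improvise signals for the rest.
    (Real learners generalise instead of improvising, driving structure.)"""
    lexicon = dict(training_data)
    for meaning in MEANINGS:
        if meaning not in lexicon:
            lexicon[meaning] = "".join(random.choice("abcd") for _ in range(4))
    return lexicon

def transmit(lexicon, bottleneck=6):
    """The transmission bottleneck: the next learner sees only a subset."""
    sample = random.sample(MEANINGS, bottleneck)
    return [(meaning, lexicon[meaning]) for meaning in sample]

language = learn([])  # generation 0: an unstructured, holistic language
for generation in range(10):
    language = learn(transmit(language))  # each generation learns from the last
```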


2020
Author(s): Mora Maldonado, Carmen Saldana, Jennifer Culbertson

The idea that universal representations of hierarchical structure constrain patterns of linear order is central to many linguistic theories. In this paper we use artificial language learning techniques to experimentally probe this claim. Specifically, we investigate how a hypothesized hierarchy of φ-features impacts the linearization of person and number affixes by (English-speaking) learners in the lab.


Phonology, 2020, Vol. 37(1), pp. 65-90
Author(s): Alexander Martin, Sharon Peperkamp

Substance-based phonological theories predict that a preference for phonetically natural rules (those which reflect constraints on speech production and perception) is encoded in synchronic grammars, and translates into learning biases. Some previous work has shown evidence for such biases, but methodological concerns with these studies mean that the question warrants further investigation. We revisit this issue by focusing on the learning of palatal vowel harmony (phonetically natural) compared to disharmony (phonetically unnatural). In addition, we investigate the role of memory consolidation during sleep on rule learning. We use an artificial language learning paradigm with two test phases separated by twelve hours. We observe a robust effect of phonetic naturalness: vowel harmony is learned better than vowel disharmony. For both rules, performance remains stable after twelve hours, regardless of the presence or absence of sleep.


Phonology, 2014, Vol. 31(3), pp. 399-433
Author(s): Scott Myers, Jaye Padgett

Many languages have restrictions on word-final segments, such as a requirement that any word-final obstruent be voiceless. There is a phonetic basis for such restrictions at the ends of utterances, but not the ends of words. Historical linguists have long noted this mismatch, and have attributed it to an analogical generalisation of such restrictions from utterance-final to word-final position. To test whether language learners actually generalise in this way, two artificial language learning experiments were conducted. Participants heard nonsense utterances in which there was a restriction on utterance-final obstruents, but in which no information was available about word-final utterance-medial obstruents. They were then tested on utterances that included obstruents in both positions. They learned the pattern and generalised it to word-final utterance-medial position, confirming that learners are biased toward word-based distributional patterns.


2018
Author(s): Carmen Saldana, Kenny Smith, Simon Kirby, Jennifer Culbertson

Languages exhibit variation at all linguistic levels, from phonology, to the lexicon, to syntax. Importantly, that variation tends to be (at least partially) conditioned on some aspect of the social or linguistic context. When variation is unconditioned, language learners regularise it, removing some or all variants, or conditioning variant use on context. Previous studies using artificial language learning experiments have documented regularising behaviour in the learning of lexical, morphological, and syntactic variation. These studies implicitly assume that regularisation reflects uniform mechanisms and processes across linguistic levels. However, studies on natural language learning and pidginisation suggest that morphological and syntactic variation may be treated differently. In particular, there is evidence that morphological variation may be more susceptible to regularisation (Good 2015; Siegel 2006; Slobin 1986). Here we provide the first systematic comparison of the strength of regularisation across these two linguistic levels. In line with previous studies, we find that the presence of a favoured variant can induce different degrees of regularisation. However, when input languages are carefully matched, with comparable initial variability and no variant-specific biases, regularisation can be comparable across morphology and word order. This is the case regardless of whether the task is explicitly communicative. Overall, our findings suggest an overarching regularising mechanism at work, with apparent differences among levels likely due to differences in inherent complexity or variant-specific biases. Differences between production and encoding in our tasks further suggest that this overarching mechanism is driven by production.


2014, Vol. 41(S1), pp. 64-77
Author(s): Susan Goldin-Meadow

Young children are skilled language learners. They apply their skills to the language input they receive from their parents and, in this way, derive patterns that are statistically related to their input. But being an excellent statistical learner does not explain why children who are not exposed to usable linguistic input nevertheless communicate using systems containing the fundamental properties of language. Nor does it explain why learners sometimes alter the linguistic input to which they are exposed (input from either a natural or an artificial language). These observations suggest that children are prepared to learn language. Our task now, as it was in 1974, is to figure out what they are prepared with – to identify properties of language that are relatively easy to learn, the resilient properties, as well as properties of language that are more difficult to learn, the fragile properties. The new tools and paradigms for describing and explaining language learning that have been introduced into the field since 1974 offer great promise for accomplishing this task.


2018
Author(s): Jennifer Culbertson, Hanna Jarvinen, Frances Haggarty, Kenny Smith

Previous research on the acquisition of noun classification systems (e.g., grammatical gender) has found that child learners rely disproportionately on phonological cues to determine the class of a new noun, even when competing semantic cues are more reliable in their language. Culbertson, Gagliardi, and Smith (2017) argue that this likely results from the early availability of phonological information during acquisition; learners base their initial representations on formal features of nouns, only later integrating semantic cues from noun meanings. Here, we use artificial language learning experiments to show that early availability drives cue use in children (6- to 7-year-olds). However, we also find evidence of developmental changes in sensitivity to semantics; when both cue types are simultaneously available, children are more likely to rely on phonology than adults. Our results suggest that early availability and a bias favoring phonological cues both contribute to children's over-reliance on phonology in natural language acquisition.

