The Oxford Handbook of Universal Grammar
Published by Oxford University Press (ISBN 9780199573776)

Author(s):  
Maria-Teresa Guasti

Humans acquire language naturally, that is, without specific instruction, simply by being exposed to it and by interacting with other human beings. According to the generativist enterprise, humans are endowed with a system of knowledge about the form of possible human languages (Universal Grammar). The chapter provides evidence consistent with this assumption by illustrating crucial phenomena ranging from the acquisition of phonology to morphosyntax, syntax, formal semantics, and pragmatics. Infants’ brain organization is tuned to speech stimuli and shows left-hemisphere specialization from the first days of life, if not already in the mother’s womb. Infants distinguish between languages at two days of age on the basis of durational or rhythmic properties. Toddlers combine words in accordance with the basic word order and are highly sensitive to the hierarchical organization of sentences, both in their syntactic structure and in their interpretation.


Author(s):  
Peter Ludlow

Universal Grammar (UG) is a parametric system that establishes individual norms for language users. This is the sense in which UG is part of a competence theory rather than a performance theory. By establishing individual norms, it does not control our linguistic behavior but acts as a kind of regulator that warns us when we diverge from an optimal set-point value. These warnings take the form of linguistic judgments. Such judgments are surface-level: they do not tell us which rules have been violated; they merely tell us that something is amiss, or which of two forms is preferable. They are merely judgments of acceptability. While this approach leads us into the rule-following arguments of Wittgenstein and Kripke, those arguments can be defused if handled with care.


Author(s):  
Eric Fuß

The present chapter outlines a research program for historical linguistics based on the idea that the object of the formal study of language change should be defined as grammar change, that is, a set of discrete differences between the target grammar and the grammar acquired by the learner (Hale 2007). This approach is shown to offer new answers to some classical problems of historical linguistics (Weinreich et al. 1968), concerning in particular the actuation of change and the observation that the transition from one historical state to another proceeds gradually. It is argued that learners are highly sensitive to small fluctuations in the linguistic input they receive, making change inevitable, while the impression of gradualness is linked to independent factors (diffusion through a speech community, and grammar competition). Special attention is paid to grammaticalization phenomena, which offer insights into the nature of functional categories, the building blocks of clause structure.


Author(s):  
Luigi Rizzi

This chapter illustrates the technical notion of ‘explanatory adequacy’ in the context of the other forms of empirical adequacy envisaged in the history of generative grammar: an analysis of a linguistic phenomenon is said to meet ‘explanatory adequacy’ when it comes with a reasonable account of how the phenomenon is acquired by the language learner. It discusses the relevance of arguments from the poverty of the stimulus, which bear on the complexity of the task that every language learner successfully accomplishes, and therefore define critical cases for evaluating the explanatory adequacy of a linguistic analysis. After illustrating the impact that parametric models had on the possibility of achieving explanatory adequacy on a large scale, the chapter addresses the role that explanatory adequacy plays in the context of the Minimalist Program, and the interplay that the concept has with the further explanation ‘beyond explanatory adequacy’ that minimalist analysis seeks.


Author(s):  
George Tsoulas

This chapter considers the part semantics plays in Universal Grammar. The core cases considered include principles of semantic computation, the realisation of arguments, and the place of truth and reference, especially with respect to anaphora. The chapter highlights the central role of Merge in the construction of structure and meaning and extends it to the construction of lexical items and the lexicon, where attention is paid not only to the way lexical items are constructed (by Merge) but also to the ways in which they are taken apart in derivations, again by Merge (self-Merge).


Author(s):  
Wolfram Hinzen

This article explores the relationship between universal grammar and the philosophy of mind. It first provides an overview of the philosophy of mind, focusing on its basic metaphysical orientation as well as its concern with mental states. It then considers some basic paradigms in the philosophy of mind and what generative grammar had to contribute to these paradigms, which include behaviourism, eliminative materialism, anomalous monism, instrumentalism, and functionalism. It also discusses what we might call the ‘philosophy of generative grammar,’ and especially foundational assumptions in generative grammar, and examines what the linguistic contribution to the philosophy of mind has been. The article concludes by reflecting on the future and outlining current visions for where and how linguistics might prove to have a transformative influence on philosophy.


Author(s):  
Brett Miller
Neil Myler
Bert Vaux

This chapter draws a distinction between Universal Grammar (the initial state of the computational system that underwrites the human capacity for language) and the Language Acquisition Device (the complex of components of the mind/brain involved in constructing grammar+lexicon pairs upon exposure to primary linguistic data). It then considers whether there are any substantive phonological components of Universal Grammar stricto sensu. Two of the strongest empirical arguments for the existence of such phonological content in UG have been (i) apparent constraints on the space of variation induced from the typological record, and (ii) apparently universal dispreferences against certain phonological configurations (known as markedness). The chapter examines these arguments in the light of recent literature, concluding that the phenomena yield at least as well to historical, phonetic, or other non-UG explanations. We suggest that language acquisition experiments, involving natural and artificial languages, may be a more fruitful domain for future research into these questions.


Author(s):  
Terje Lohndal
Juan Uriagereka

This chapter discusses the factors determining the structure of an I-language: genetic endowment, input/exposure to language, and principles not specific to language. The latter have become known as ‘third factors’: principles argued to contribute to shaping the structure of grammars without being specific to language. Computational efficiency is one such principle that has been suggested. The chapter traces and discusses the historical roots of the third-factor perspective and compares it to a similar perspective in comparative biology outlined by the late Stephen Jay Gould. After reviewing a few examples of plausible third factors, the chapter ends with a discussion of the complex task of determining whether a given linguistic condition may be a third factor.


Author(s):  
Anders Holmberg

Linguistic typology is a research program that aims to describe and understand linguistic variation, distinguishing between properties shared across languages for historical reasons and properties shared for other reasons to do with ‘the nature of language.’ The preferred method is comparison of very large numbers of languages, sampled so as to control for genealogical and areal biases. The preferred mode of explanation is in terms of functional rather than formal notions. This chapter discusses the history of the research program, from Greenbergian universals to the present-day greater focus on probabilistic correlations between linguistic properties, with particular attention given to areal features. Some problems and shortcomings of this generally very successful research program are discussed, including problems of methodology and use of data, with special focus on its flagship database, WALS. Finally, the chapter discusses the relation between typology and generative linguistics, and the relation of both to Universal Grammar.


Author(s):  
Bridget D. Samuels
Marc Hauser
Cedric Boeckx

Do animals have Universal Grammar? The short answer must be ‘no.’ Otherwise, why do human children learn language with strikingly little conscious effort, while no other animal has even come close to approximating human language, even with extensive training or exposure to massive linguistic input? However, many of the cognitive capacities which clearly serve our linguistic ability—rich conceptual systems, vocal imitation, categorical perception, and so on—are shared with other species, including some of our closest living relatives. This suggests that the question is more complicated than it might first appear. In the present work, we use phonology as a case study to show what type of cross-species evidence may bear—now and in future work—on the issue of whether animals have various components of UG, which we construe here broadly as the systems that are recruited by language but need not be specific to it.

