THE MINIMALIST PROGRAM. Noam Chomsky. Cambridge, MA: MIT Press, 1994. Pp. 420. $22.50 cloth.

1997 ◽  
Vol 19 (1) ◽  
pp. 121-123 ◽  
Author(s):  
Alan Munn

This most recent exposition of Chomsky's ideas about the language faculty strives to reach a deeper level of explanatory adequacy. Rather than asking "What does knowledge of language consist of?", Chomsky asks "Why is the language faculty the way it is?" His basic answer is the following: two sorts of conditions are imposed on the language faculty, conditions arising from its place in the cognitive architecture ("bare output conditions") and conditions of conceptual naturalness such as economy, simplicity, and nonredundancy. Minimalism is thus a call for theoretical simplicity with respect to the constructs used to explain language phenomena: "It is all too easy to succumb to the temptation to offer purported explanation for some phenomenon on the basis of assumptions that are roughly of the order of complexity of what is to be explained. . . . Minimalist demands at least have the merit of highlighting such moves, thus sharpening the question of whether we have a genuine explanation or a restatement of the problem in other terms" (pp. 234–235).

1999 ◽  
Vol 22 (6) ◽  
pp. 991-1013 ◽  
Author(s):  
Harald Clahsen

Following much work in linguistic theory, it is hypothesized that the language faculty has a modular structure and consists of two basic components: a lexicon of (structured) entries and a computational system of combinatorial operations that forms larger linguistic expressions from lexical entries. This target article provides evidence for the dual nature of the language faculty by describing recent results of a multidisciplinary investigation of German inflection. We have examined: (1) its linguistic representation, focussing on noun plurals and verb inflection (participles); (2) processes involved in the way adults produce and comprehend inflected words; (3) brain potentials generated during the processing of inflected words; and (4) the way children acquire and use inflection. It will be shown that the evidence from all these sources converges and supports the distinction between lexical entries and combinatorial operations.

Our experimental results indicate that adults have access to two distinct processing routes: one accessing (irregularly) inflected entries from the mental lexicon, and another involving morphological decomposition of (regularly) inflected words into stem+affix representations. These two processing routes correspond to the dual structure of the linguistic system. Results from event-related potentials confirm this linguistic distinction at the level of brain structures. In children's language, we have also found these two processes to be clearly dissociated: regular and irregular inflection are used under different circumstances, and the constraints under which children apply them are identical to those of the adult linguistic system.

Our findings will be explained in terms of a linguistic model that maintains the distinction between the lexicon and the computational system but replaces the traditional view of the lexicon as a simple list of idiosyncrasies with the notion of internally structured lexical representations.
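The dual-route idea described above can be given a minimal illustrative sketch. This is not the authors' implementation, and the mini-lexicon below is a hypothetical sample: irregular forms are retrieved whole from stored lexical entries, while regular forms are built combinatorially as stem + affix.

```python
# Illustrative sketch of dual-route processing for German past participles.
# Route 1: whole-form retrieval from the mental lexicon (irregulars).
# Route 2: combinatorial rule ge- + stem + -t (regulars).

# Hypothetical mini-lexicon of irregular participles (full-form entries).
IRREGULAR_PARTICIPLES = {
    "singen": "gesungen",   # 'sing'
    "gehen": "gegangen",    # 'go'
}

def participle(verb: str) -> str:
    """Return a past participle via lexical lookup or the regular rule."""
    # Route 1: direct retrieval of a stored irregular entry.
    if verb in IRREGULAR_PARTICIPLES:
        return IRREGULAR_PARTICIPLES[verb]
    # Route 2: decompose/compose as ge- + stem + -t.
    stem = verb[:-2] if verb.endswith("en") else verb
    return "ge" + stem + "t"

print(participle("kaufen"))  # regular: gekauft
print(participle("singen"))  # irregular: gesungen
```

The lookup route wins whenever a stored entry exists, mirroring the finding that irregular forms block the application of the regular rule.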


2021 ◽  
Vol 6 (4) ◽  
pp. 36-42
Author(s):  
Sneha Kannusamy

This research paper sheds light on the reformation of culture through language and translation. It introduces definitions of language, culture, and translation, and then explains the relationships among culture, language, and translation with reference to the scholarly literature. Culture is built up through different languages, which shape the way we express feelings and thoughts, and translation is the process through which this is carried across language boundaries. The paper studies how culture is reformed through language and translation and how that reformation becomes even more pronounced structurally in succeeding generations. The reformation is traced not only in non-fictional works but also in fictional plays and novels, cited with authentic references. Limitations, such as translations that fail to capture a word's accurate meaning, may invite a pessimistic view, but the paper deals in detail with how translation nonetheless helps people learn varied concepts of language and become introduced to a vast range of cultures. It also addresses cultural refinement through linguistic anthropology and postcolonialism. The study shows, with examples, the level of consciousness people have toward language and translation, which allows them to become acquainted with particular cultures and promotes unity. Language reflects culture, so refinement in language mirrors refinement in culture, proving that literacy is directly connected with culture in education. Translation is a powerful influence, shifting people from one culture to a taste of another. To build a sound society, in-depth knowledge of language and culture, inculcated through translation, helps build a culture for posterity.


Author(s):  
Norbert Hornstein

Fish swim, birds fly, people talk. The talents displayed by fish and birds rest on specific biological structures whose intricate detail is attributable to genetic endowment. Human linguistic capacity similarly rests on dedicated mental structures many of whose specific details are an innate biological endowment of the species. One of Chomsky’s central concerns has been to press this analogy and uncover its implications for theories of mind, meaning and knowledge. This work has proceeded along two broad fronts. First, Chomsky has fundamentally restructured grammatical research. Due to his work, the central object of study in linguistics is ‘the language faculty’, a postulated mental organ which is dedicated to acquiring linguistic knowledge and is involved in various aspects of language-use, including the production and understanding of utterances. The aim of linguistic theory is to describe the initial state of this faculty and how it changes with exposure to linguistic data. Chomsky (1981) characterizes the initial state of the language faculty as a set of principles and parameters. Language acquisition consists in setting these open parameter values on the basis of linguistic data available to a child. The initial state of the system is a Universal Grammar (UG): a super-recipe for concocting language-specific grammars. Grammars constitute the knowledge of particular languages that result when parametric values are fixed. Linguistic theory, given these views, has a double mission. First, it aims to characterize the grammars (and hence the mental states) attained by native speakers. Theories are ‘descriptively adequate’ if they attain this goal. In addition, linguistic theory aims to explain how grammatical competence is attained. Theories are ‘explanatorily adequate’ if they show how descriptively adequate grammars can arise on the basis of exposure to ‘primary linguistic data’ (PLD): the data children are exposed to and use in attaining their native grammars. 
Explanatory adequacy rests on an articulated theory of UG, and in particular a detailed theory of the general principles and open parameters that characterize the initial state of the language faculty (that is, the biologically endowed mental structures). Since the mid-1990s Chomsky has emphasized a third mission: to explain how the capacity for language could have arisen in the species. Chomsky (2004) has described theories that address this third concern as going "beyond explanatory adequacy," meaning that they not only attain explanatory adequacy, but also provide a plausible path for the emergence in humans of the "Faculty of Language" (the name given to whatever it is that allows humans to acquire language in the way that they do). Chomsky has also pursued a second set of concerns. He has vigorously criticized many philosophical nostrums from the perspective of this revitalized approach to linguistics. Three topics he has consistently returned to are:
- Knowledge of language and its general epistemological implications
- Indeterminacy and underdetermination in linguistic theory
- Person-specific 'I-languages' versus socially constituted 'E-languages' as the proper objects of scientific study.
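The principles-and-parameters picture sketched above can be made concrete with a toy model. This is my own illustration, not Chomsky's formalism, and the parameter names and input cues are invented for the example: UG fixes a small set of open parameters, a particular grammar is one assignment of values, and acquisition sets those values from primary linguistic data (PLD).

```python
# Toy illustration of parameter setting in a principles-and-parameters model.

# Hypothetical binary parameters of the initial state (UG).
PARAMETERS = ("head_initial", "pro_drop")

def set_parameters(pld: list) -> dict:
    """Fix parameter values from crude cues in the input data (PLD)."""
    grammar = {p: False for p in PARAMETERS}
    for sentence in pld:
        # Toy cue: verb-object order signals a head-initial setting.
        if "V O" in sentence:
            grammar["head_initial"] = True
        # Toy cue: a clause beginning with the verb (no overt subject)
        # signals a pro-drop setting.
        if sentence.startswith("V"):
            grammar["pro_drop"] = True
    return grammar

# An 'English-like' PLD sample: verb-object order, subjects obligatory.
print(set_parameters(["S V O", "S V O"]))
# → {'head_initial': True, 'pro_drop': False}
```

Different PLD yields a different grammar from the same UG "super-recipe," which is the sense in which descriptively adequate grammars arise from exposure to data.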


Author(s):  
Amy Kind

Imagination is a speculative mental state that allows us to consider situations apart from the here and now. Historically, imagination played an important role in the works of many of the major philosophical figures in the Western tradition – from Aristotle to Descartes to Hume to Kant. By the middle of the twentieth century, in the wake of the behavioristic mindset that had dominated both psychology and philosophy in the early part of the century, imagination had largely faded from philosophical view and received scant attention from the 1960s through the 1980s. But imagination returned to the limelight in the late twentieth century, as it was given increasing prominence in both aesthetics and philosophy of mind. In aesthetics, interest in imagination derives in large part from its role in our engagement with works of art, music, and literature. For example, some philosophers have called upon imagination to capture the essence of fiction, while others have called upon it to explain how listeners understand the expressive nature of musical works. Yet others have seen imagination as centrally involved in ontological questions about art; in particular, they take works of art to be best understood as in some sense imaginary objects. In philosophy of mind, imagination plays an especially important role in discussions of mindreading, that is, our ability to understand the mental states of others. While theory theorists claim that we do this by calling upon a folk theory of mind, simulation theorists claim that we mindread by simulating the mental states of others – with simulation typically cashed out in terms of imagination. More generally, philosophers of mind who are interested in questions of cognitive architecture tend to be especially interested in imagination and its relationship to belief and desire. In fact, imagination has come to play an important role in a wide variety of philosophical contexts in addition to aesthetics and philosophy of mind. 
It has traditionally been central to discussions of thought experimentation and modal epistemology, where an analogy is often drawn between the way perception justifies beliefs about actuality and the way imagination seems to justify beliefs about possibility. Imagination has also been invoked to explain pretence, dreaming, empathy, delusion, and our ability to engage in counterfactual reasoning.


2021 ◽  
Vol 12 ◽  
Author(s):  
Hans Buffart ◽  
Haike Jacobs

The fact that human language is highly structured and that, moreover, the way it is structured shows striking similarities across the world's languages has been addressed from two different perspectives. The first, and more traditional, generative hypothesis is that the similarities are due to an innate language faculty: there is an inborn 'grammar' with universal principles that manifest themselves in each language, and cross-linguistic variation arises from different settings of the parameters of those universal principles. A second perspective is that there is no inborn, innate language faculty, but that structure instead emerges from language usage. This paper develops and illustrates a third perspective, according to which the structural similarities in human languages are the result of the way the cognitive system works in perception. The essential claim is that structural properties follow from the limitations of human cognition in focus.


2022 ◽  
Vol 4 ◽  
Author(s):  
Neil Cohn ◽  
Joost Schilperoord

Language is typically embedded in multimodal communication, yet models of linguistic competence do not often incorporate this complexity. Meanwhile, speech, gesture, and/or pictures are each considered as indivisible components of multimodal messages. Here, we argue that multimodality should be characterized not by whole interacting behaviors, but by interactions of similar substructures which permeate across expressive behaviors. These structures comprise a unified architecture and align within Jackendoff's Parallel Architecture: a modality, meaning, and grammar. Because this tripartite architecture persists across modalities, interactions can manifest within each of these substructures. Interactions between modalities alone create correspondences in time (e.g., speech with gesture) or space (e.g., writing with pictures) of the sensory signals, while multimodal meaning-making balances how modalities carry "semantic weight" for the gist of the whole expression. Here we focus primarily on interactions between grammars, which contrast across two variables: symmetry, related to the complexity of the grammars, and allocation, related to the relative independence of interacting grammars. While independent allocations keep grammars separate, substitutive allocation inserts expressions from one grammar into those of another. We show that substitution operates in interactions between all three natural modalities (vocal, bodily, graphic), and also in unimodal contexts within and between languages, as in codeswitching. Altogether, we argue that unimodal and multimodal expressions arise as emergent interactive states from a unified cognitive architecture, heralding a reconsideration of the "language faculty" itself.


Author(s):  
Irina Monich

Tone is indispensable for understanding many morphological systems of the world. Tonal phenomena may serve the morphological needs of a language in a variety of ways: segmental affixes may be specified for tone just like roots are; affixes may have purely tonal exponents that associate to segmental material provided by other morphemes; affixes may consist of tonal melodies, or “templates”; and tonal processes may apply in a way that is sensitive to morphosyntactic boundaries, delineating word-internal structure. Two behaviors set tonal morphemes apart from other kinds of affixes: their mobility and their ability to apply phrasally (i.e., beyond the limits of the word). Both floating tones and tonal templates can apply to words that are either phonologically grouped with the word containing the tonal morpheme or syntactically dependent on it. Problems generally associated with featural morphology are even more acute in regard to tonal morphology because of the vast diversity of tonal phenomena and the versatility with which the human language faculty puts pitch to use. The ambiguity associated with assigning a proper role to tone in a given morphological system necessitates placing further constraints on our theory of grammar. Perhaps more than any other morphological phenomena, grammatical tone exposes an inadequacy in our understanding both of the relationship between phonological and morphological modules of grammar and of the way that phonology may reference morphological information.


2020 ◽  
pp. 13-32
Author(s):  
Steven Gross

Linguistic intuitions are a central source of evidence across a variety of linguistic domains. They have also long been a source of controversy. This chapter aims to illuminate the etiology and evidential status of at least some linguistic intuitions by relating them to error signals of the sort posited by accounts of online monitoring of speech production and comprehension. The suggestion is framed as a novel reply to Michael Devitt’s claim that linguistic intuitions are theory-laden “central systems” responses rather than endorsed outputs of a modularized language faculty (the “Voice of Competence”). Along the way, it is argued that linguistic intuitions may not constitute a natural kind with a common etiology and that, for a range of cases, the process by which the intuitions used in linguistics are generated amounts to little more than comprehension.


Author(s):  
Ian Mackenzie ◽  
Wade Davis

Contemporary linguistics, preoccupied with syntax, has neglected the lexicon. Yet languages in general may diverge more fundamentally with respect to the lexicon than they do on the level of syntax. Such lexical divergences may result in real differences in the way distinct human groups think. When lexicosemantic divergence between two languages leads to a situation where a concept expressed in one language cannot be translated into another, we have a case of absolute untranslatability. Speakers of the two languages necessarily conceive the world in different ways. A new corpus of data collected from the Penan nomads of Borneo provides instances of absolute untranslatability between their language and English. The extinction of languages like Penan is a tragedy for science: not only are their lexicons the repositories of enormous amounts of cultural data, but their dissolution results in the loss of information that may shed light on the nature of the language faculty and human cognition in general.


2020 ◽  
Vol 129 (3) ◽  
pp. 323-393
Author(s):  
E. J. Green

A venerable view holds that a border between perception and cognition is built into our cognitive architecture and that this imposes limits on the way information can flow between them. While the deliverances of perception are freely available for use in reasoning and inference, there are strict constraints on information flow in the opposite direction. Despite its plausibility, this approach to the perception-cognition border has faced criticism in recent years. This article develops an updated version of the architectural approach, which I call the dimension restriction hypothesis (DRH). According to DRH, perceptual processes are constrained to compute over a bounded range of dimensions, while cognitive processes are not. This view allows that perception is cognitively penetrable, but places strict limits on the varieties of penetration that can occur. The article argues that DRH enjoys both theoretical and empirical support, and also defends the view against several objections.

