The Oxford Handbook of Psycholinguistics
Latest Publications

Total documents: 49 (five years: 0)
H-index: 7 (five years: 0)
Published by: Oxford University Press
ISBN: 9780198568971

Author(s):  
David B. Pisoni
Susannah V. Levi

This article examines how new approaches—coupled with previous insights—provide a new framework for questions that deal with the nature of phonological and lexical knowledge and representation, processing of stimulus variability, and perceptual learning and adaptation. First, it outlines the traditional view of speech perception and identifies some problems with assuming such a view, in which only abstract representations exist. The article then discusses some new approaches to speech perception that retain detailed information in the representations. It also considers a view which rejects abstraction altogether, but shows that such a view has difficulty dealing with a range of linguistic phenomena. After providing a brief discussion of some new directions in linguistics that encode both detailed information and abstraction, the article concludes by discussing the coupling of speech perception and spoken word recognition.
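
To make the contrast concrete, the sketch below (a toy illustration, not a model from the article; the categories, values, and similarity rule are invented for exposition) shows the exemplar-style alternative: tokens are stored with their acoustic detail, and a new token is categorized by its summed similarity to the stored traces rather than by matching a single abstract form.

```python
import numpy as np

# Hypothetical stored exemplars: each row is one remembered token (e.g., two
# formant values), kept with its detail rather than reduced to a single
# abstract prototype.
exemplars = {
    "bit": np.array([[400.0, 1990.0], [420.0, 2050.0], [390.0, 2010.0]]),
    "bet": np.array([[580.0, 1800.0], [600.0, 1850.0], [570.0, 1820.0]]),
}

def similarity(token, trace, sensitivity=0.01):
    """Exponentially decaying similarity to one stored trace (GCM-style)."""
    return np.exp(-sensitivity * np.linalg.norm(token - trace))

def classify(token):
    """Category support is the summed similarity to that category's exemplars."""
    support = {cat: sum(similarity(token, trace) for trace in traces)
               for cat, traces in exemplars.items()}
    total = sum(support.values())
    return {cat: s / total for cat, s in support.items()}

print(classify(np.array([410.0, 2000.0])))  # falls near the "bit" exemplars
```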


Author(s):  
John C. Trueswell
Lila R. Gleitman

This article describes what is known about the adult end-state, namely, that the adult listener recovers the syntactic structure of an utterance in real-time via interactive probabilistic parsing procedures. It examines evidence indicating that similar mechanisms are at work quite early during language learning, such that infants and toddlers attempt to parse the speech stream probabilistically. In the case of learning, though, the parsing is in aid of discovering relevant lower-level linguistic formatives such as syllables and words. Experimental observations about child sentence-processing abilities are still quite sparse, owing in large part to the difficulty of applying adult experimental procedures to child participants; reaction time, reading, and linguistic judgement methods have all been attempted with children. The article discusses real-time sentence processing in adults, experimental exploration of child sentence processing, eye movements during listening and the kindergarten-path effect, verb biases in syntactic ambiguity resolution, prosody and lexical biases in child parsing, parsing development in a head-final language, and the place of comprehension in a theory of language acquisition.
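
The sketch below is an illustrative toy, not the authors' model, and all probabilities are invented. It conveys the flavor of interactive probabilistic parsing by weighing a verb's bias toward a goal analysis of an ambiguous prepositional phrase against referential evidence from the visual context, as in the kindergarten-path materials.

```python
# Toy illustration of probabilistic ambiguity resolution: an ambiguous PP
# ("on the napkin") can attach as the verb's goal argument or as a modifier of
# the preceding noun. Verb-specific biases (invented numbers) are combined with
# contextual support for the modifier reading.
verb_bias = {
    "put":    0.90,   # P(PP is a goal argument | verb): "put" strongly expects a destination
    "choose": 0.15,   # "choose" rarely takes a goal PP
}

def preferred_attachment(verb, modifier_support=0.5):
    """Combine verb bias with referential evidence for a modifier reading."""
    p_goal = verb_bias[verb] * (1.0 - modifier_support)
    p_modifier = (1.0 - verb_bias[verb]) * modifier_support
    total = p_goal + p_modifier
    return {"goal": p_goal / total, "modifier": p_modifier / total}

# With two frogs in the display, the modifier reading gains referential support.
print(preferred_attachment("put", modifier_support=0.8))
print(preferred_attachment("choose", modifier_support=0.8))
```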


Author(s):  
Marta Kutas
Kara D. Federmeier

The intact human brain is the only known system that can interpret visual and acoustic patterns as language and respond to them. Therefore, unlike researchers of other cognitive phenomena, (neuro)psycholinguists cannot avail themselves of invasive techniques in non-human animals to uncover the responsible mechanisms in the large parts of the (human) brain that have been implicated in language processing. Engagement of these different anatomical areas does, however, generate distinct patterns of biological activity (such as ion flow across neural membranes) that can be recorded inside and outside the heads of humans as they understand spoken, written, or signed sentences quickly, often seamlessly, and without much conscious reflection on the computations and linguistic regularities involved. This article summarizes studies of event-related brain potentials and sentence processing. It discusses electrophysiology, language and the brain, processing language meaning, context effects in meaning processing, non-literal language processing, processing language form, parsing, slow potentials and the closure positive shift, and plasticity and learning.
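
For readers unfamiliar with the technique, the sketch below (simulated data; all amplitudes and latencies are illustrative) shows how an event-related potential such as the N400 is typically derived: EEG epochs are time-locked to word onset and averaged, so activity unrelated to the word tends to cancel while the event-related component remains.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                                 # sampling rate in Hz (illustrative)
t = np.arange(-0.1, 0.8, 1.0 / fs)       # epoch: 100 ms before to 800 ms after word onset

def simulated_epoch(n400_amplitude):
    """One EEG epoch: background noise plus a negative deflection near 400 ms."""
    noise = rng.normal(0.0, 5.0, t.size)
    n400 = -n400_amplitude * np.exp(-((t - 0.4) ** 2) / (2 * 0.05 ** 2))
    return noise + n400

def erp(epochs):
    """Averaging time-locked epochs attenuates activity not tied to the event."""
    return np.mean(epochs, axis=0)

# Words that are hard to integrate with their context elicit a larger N400.
congruent = erp([simulated_epoch(2.0) for _ in range(40)])
incongruent = erp([simulated_epoch(6.0) for _ in range(40)])
window = (t > 0.3) & (t < 0.5)
print("mean 300-500 ms amplitude (microvolts):",
      congruent[window].mean(), incongruent[window].mean())
```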


Author(s):  
Michael K. Tanenhaus

Recently, eye movements have become a widely used response measure for studying spoken language processing in both adults and children, in situations where participants comprehend and generate utterances about a circumscribed “Visual World” while fixation is monitored, typically using a free-view eye-tracker. Psycholinguists now use the Visual World eye-movement method to study both language production and language comprehension, in studies that run the gamut of current topics in language processing. Eye movements are a response measure of choice for addressing many classic questions about spoken language processing in psycholinguistics. This article reviews the burgeoning Visual World literature on language comprehension, highlighting some of the seminal studies and examining how the Visual World approach has contributed new insights to our understanding of spoken word recognition, parsing, reference resolution, and interactive conversation. It considers some of the methodological issues that come to the fore when psycholinguists use eye movements to examine spoken language comprehension.
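
The standard dependent measure in such studies is the proportion of trials on which the eyes are on each display object in successive time bins after word onset. The sketch below illustrates that analysis with simulated looks and hypothetical object labels.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(1)
objects = ["target", "cohort_competitor", "unrelated_1", "unrelated_2"]

def simulated_trial(n_bins=30):
    """Which object is fixated in each 20-ms bin after word onset (simulated)."""
    looks = []
    for b in range(n_bins):
        # As the word unfolds, looks shift from the cohort competitor to the target.
        p_target = min(0.1 + 0.03 * b, 0.8)
        p_cohort = max(0.3 - 0.01 * b, 0.05)
        p_other = (1.0 - p_target - p_cohort) / 2.0
        looks.append(rng.choice(objects, p=[p_target, p_cohort, p_other, p_other]))
    return looks

def fixation_proportions(trials):
    """Proportion of trials fixating each object, bin by bin."""
    n_bins = len(trials[0])
    curves = []
    for b in range(n_bins):
        counts = Counter(trial[b] for trial in trials)
        curves.append({obj: counts[obj] / len(trials) for obj in objects})
    return curves

curves = fixation_proportions([simulated_trial() for _ in range(60)])
print(curves[0]["target"], curves[-1]["target"])  # target looks rise over time
```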


Author(s):  
Sheila Blumstein

This article reviews current knowledge about the nature of auditory word recognition deficits in aphasia. It assumes that the language functioning of adults with aphasia was normal prior to sustaining brain injury, and that their word recognition system was intact. As a consequence, the study of aphasia provides insight into how damage to particular areas of the brain affects speech and language processing, and thus provides a crucial step in mapping out the neural systems underlying speech and language processing. To this end, much of the discussion focuses on word recognition deficits in Broca's and Wernicke's aphasics, two clinical syndromes that have provided the basis for much of the study of the neural basis of language. Clinically, Broca's aphasics have a profound expressive impairment in the face of relatively good auditory language comprehension. This article also considers deficits in processing the sound structure of language, graded activation of the lexicon, lexical competition, influence of word recognition on speech processing, and influence of sentential context on word recognition.


Author(s):  
Antje S. Meyer
Eva Belke

Current models of word form retrieval converge on several central assumptions. They all distinguish between morphological, phonological, and phonetic representations and processes; they all assume morphological and phonological decomposition, and agree on the main processing units at these levels. In addition, all current models of word form retrieval postulate the same basic retrieval mechanisms: activation and selection of units. Models of word production often distinguish between processes concerning the selection of a single word unit from the mental lexicon and the retrieval of the associated word form. This article explores lexical selection and word form retrieval in language production. Following the distinctions in linguistic theory, it discusses morphological encoding, phonological encoding, and phonetic encoding. The article also considers the representation of phonological knowledge, building of phonological representations, segmental retrieval, retrieval of metrical information, generating the phonetic code of words, and a model of word form retrieval.
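
The sketch below is a deliberately simplified illustration of that shared activation-and-selection mechanism, not any particular published model; the rates, decay, and selection criterion are invented. Candidate units accumulate activation in proportion to input support, and one is selected once it leads its closest competitor by a criterion.

```python
# Minimal activation-and-selection sketch: candidates accumulate activation from
# the input (with decay), and one is selected when its lead over the strongest
# competitor exceeds a criterion. All parameter values are illustrative.
def select(candidates, input_support, rate=0.2, decay=0.1, criterion=0.5, max_steps=100):
    activation = {c: 0.0 for c in candidates}
    for step in range(max_steps):
        for c in candidates:
            activation[c] += rate * input_support.get(c, 0.0) - decay * activation[c]
        ranked = sorted(activation, key=activation.get, reverse=True)
        best, runner_up = ranked[0], ranked[1]
        if activation[best] - activation[runner_up] >= criterion:
            return best, step
    return None, max_steps  # no unit won within the deadline

# Hypothetical segmental candidates competing during phonological encoding.
winner, steps = select(["t", "d", "k"], {"t": 1.0, "d": 0.6, "k": 0.2})
print(winner, steps)
```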


Author(s):  
Mark S. Seidenberg

Connectionist computational models have been extensively used in the study of reading: how children learn to read, skilled reading, and reading impairments (dyslexia). The models are computer programs that simulate detailed aspects of behaviour. This article provides an overview of connectionist models of reading, with an emphasis on the “triangle” framework. The term “connectionism” refers to a broad, varied set of ideas, loosely connected by an emphasis on the notion that complexity, at different grain sizes or scales ranging from neurons to overt behaviour, emerges from the aggregate behaviour of large networks of simple processing units. This article focuses on the parallel distributed processing variety developed by Rumelhart, McClelland, and Hinton (1986). First, it describes basic elements of connectionist models of reading: task orientation, distributed representations, learning, hidden units, and experience. The article then looks at how models are used to establish causal effects, along with quasiregularity and division of labor.
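
The sketch below is a toy instance of the kind of network such models build on, not a re-implementation of any published reading model: a distributed orthographic input is mapped through a layer of hidden units to a distributed phonological output, with connection weights adjusted gradually by backpropagation over repeated experience with the training patterns (which here are random and purely illustrative).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy distributed codes: each "word" is a binary orthographic pattern paired
# with a binary phonological pattern. Real models use structured codes and far
# larger, frequency-weighted training corpora.
X = rng.integers(0, 2, size=(20, 10)).astype(float)   # orthographic inputs
Y = rng.integers(0, 2, size=(20, 6)).astype(float)    # phonological targets

n_hidden = 8
W1 = rng.normal(0.0, 0.5, (10, n_hidden))   # orthography -> hidden units
W2 = rng.normal(0.0, 0.5, (n_hidden, 6))    # hidden units -> phonology

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):                   # learning = gradual weight adjustment
    H = sigmoid(X @ W1)                     # hidden-unit activations
    P = sigmoid(H @ W2)                     # predicted phonological pattern
    delta_out = (P - Y) * P * (1 - P)       # output error signal
    delta_hidden = (delta_out @ W2.T) * H * (1 - H)   # error passed back to hidden layer
    W2 -= 0.1 * (H.T @ delta_out)
    W1 -= 0.1 * (X.T @ delta_hidden)

final_output = sigmoid(sigmoid(X @ W1) @ W2)
print("mean absolute output error after training:", np.abs(final_output - Y).mean())
```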


Author(s):  
Peter Indefrey

This article adopts the production model of Levelt to discuss brain imaging studies of continuous speech. Conclusions about the involvement of brain regions in processes of language production are mainly drawn on the basis of the presence or absence of processing components of speaking in certain experimental tasks. Such conclusions are largely theory independent, because differences between current models do not concern the assumed processing levels but the exact nature of the information flow between them. In a second step, the article tests some of these conclusions by comparing the few available data on activation time courses of brain regions and independent evidence on the timing of processes in language production. It also discusses brain regions involved in word production, conceptually driven lexical selection, phonological code (word form) retrieval, phonological encoding, phonetic encoding and articulation, self-monitoring, whether the hemodynamic core areas are necessary for word production, and bilingual language production.


Author(s):  
Karen Emmorey

Biology-based distinctions between sign and speech can be exploited to discover how the input–output systems of language impact online language processing and affect the neurocognitive underpinnings of language comprehension and production. This article explores which aspects of language processing appear to be universal to all human languages and which are affected by the particular characteristics of audition versus vision, or by the differing constraints on manual versus oral articulation. Neither sign language nor spoken language comes pre-segmented into words and sentences for the perceiver. In contrast to written language, sign and speech are both primary language systems, acquired during infancy and early childhood without formal instruction. This article discusses sign perception and visual processing, phonology in a language without sound, categorical perception in sign language, processing universals and modality effects in the mental lexicon, the time course of sign versus word recognition, tip-of-the-fingers, non-concatenative morphology, the unique role of space for signed languages, and speaking versus signing.


Author(s):  
Dominic W. Massaro
Alexandra Jesse

This article gives an overview of the main research questions and findings unique to audiovisual speech perception research, and discusses what general questions about speech perception and cognition the research in this field can answer. The influence of a second perceptual source in audiovisual speech perception, compared to auditory-only speech perception, immediately raises the question of how the information from the different perceptual sources is used to reach the best overall decision. The article explores how our understanding of speech benefits from having the speaker's face present, and how this benefit reveals the nature of speech perception and word recognition. Modern communication methods such as Voice over Internet Protocol have found wide acceptance, yet people remain reluctant to forfeit face-to-face communication. The article also considers the role of visual speech as a language-learning tool in multimodal training, information and information processing in audiovisual speech perception, the lexicon and word recognition, facial information for speech perception, and theories of audiovisual speech perception.
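
The account most closely associated with Massaro, the Fuzzy Logical Model of Perception, answers the integration question by evaluating how strongly each source supports each response alternative, multiplying those supports, and normalizing across alternatives. The sketch below uses invented support values purely for illustration.

```python
# FLMP-style integration sketch: the support each source (auditory, visual)
# lends to each alternative is combined multiplicatively and then normalized.
# The support values below are invented for illustration.
def integrate(auditory_support, visual_support):
    combined = {alt: auditory_support[alt] * visual_support[alt]
                for alt in auditory_support}
    total = sum(combined.values())
    return {alt: value / total for alt, value in combined.items()}

# An ambiguous auditory /ba/-/da/ token paired with a face clearly articulating /da/:
auditory = {"ba": 0.55, "da": 0.45}
visual = {"ba": 0.10, "da": 0.90}
print(integrate(auditory, visual))   # the visible articulation dominates: mostly "da"
```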

