No Language-Specific Activation during Linguistic Processing of Observed Actions

PLoS ONE ◽  
2007 ◽  
Vol 2 (9) ◽  
pp. e891 ◽  
Author(s):  
Ingo G. Meister ◽  
Marco Iacoboni

1978 ◽
Vol 21 (4) ◽  
pp. 722-731 ◽  
Author(s):  
Lynn S. Bliss ◽  
Doris V. Allen ◽  
Georgia Walker

Educable and trainable mentally retarded children were administered a story completion task that elicits 14 grammatical structures. Educable children produced more correct responses overall than trainable children. Both groups found imperatives easiest and the future, embedded, and double-adjectival structures most difficult. Children classed as educable also produced more correct responses than those classed as trainable for declarative, question, and single-adjectival structures. The cognitive and linguistic processing of both groups is discussed, as are the implications for language remediation.


2021 ◽  
pp. 1-26
Author(s):  
Jan-Louis Kruger ◽  
Natalia Wisniewska ◽  
Sixin Liao

High subtitle speed undoubtedly impacts the viewer experience. However, little is known about how fast subtitles might affect the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers' reading behavior, using word-based eye-tracking measures with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing, or to integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in a subtitle, and to read subtitles to completion is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. Comprehension declined as speed increased. Eye movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed caused fewer words to be reread following both horizontal eye movements (likely reflecting reduced lexical processing) and vertical eye movements (likely reducing higher-level comprehension and integration).
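For readers unfamiliar with the cps measure, subtitle speed is simply a subtitle's character count divided by its on-screen duration. A minimal sketch of that arithmetic follows; the function name and example timings are illustrative, not taken from the study's materials:

```python
# Minimal sketch of the characters-per-second (cps) subtitle-speed measure;
# the function name and example values are illustrative, not from the study.

def subtitle_speed_cps(text: str, start_s: float, end_s: float) -> float:
    """Subtitle speed = character count / on-screen duration in seconds."""
    duration = end_s - start_s
    if duration <= 0:
        raise ValueError("subtitle must be on screen for a positive duration")
    return len(text) / duration

# A 40-character line shown for 2 s reads at 20 cps, the middle speed
# condition; showing the same line for ~1.4 s would approach the 28 cps
# condition, leaving far less time for rereading.
line = "This line is forty characters long, yes!"
print(subtitle_speed_cps(line, 0.0, 2.0))  # -> 20.0
```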


Brain ◽  
1996 ◽  
Vol 119 (4) ◽  
pp. 1239-1247 ◽  
Author(s):  
J. R. Binder ◽  
J. A. Frost ◽  
T. A. Hammeke ◽  
S. M. Rao ◽  
R. W. Cox

2010 ◽  
Vol 115 (3) ◽  
pp. 162-181 ◽  
Author(s):  
Christine Weber-Fox ◽  
Laurence B. Leonard ◽  
Amanda Hampton Wray ◽  
J. Bruce Tomblin

2019 ◽  
Author(s):  
Gwendolyn L Rehrig ◽  
Candace Elise Peacock ◽  
Taylor Hayes ◽  
Fernanda Ferreira ◽  
John M. Henderson

The world is visually complex, yet we can describe it efficiently by extracting the information that is most relevant to convey. How do the properties of real-world scenes help us decide where to look and what to say? Image salience has been the dominant explanation for what drives visual attention and production as we describe displays, but new evidence shows that scene meaning predicts attention better than image salience does. Here we investigated the relevance of one aspect of meaning, graspability (the grasping interactions that objects in a scene afford), given that affordances have been implicated in both visual and linguistic processing. We quantified image salience, meaning, and graspability for real-world scenes. In three eye-tracking experiments, native English speakers described possible actions that could be carried out in a scene. We hypothesized that graspability would preferentially guide attention due to its task relevance. In two experiments using stimuli from a previous study, meaning explained visual attention better than graspability or salience did, and graspability explained attention better than salience. In a third experiment we quantified image salience, meaning, graspability, and reach-weighted graspability for scenes depicting reachable spaces that contained graspable objects. Graspability and meaning explained attention equally well in the third experiment, and both explained attention better than salience. We conclude that speakers use object graspability to allocate attention when planning descriptions of scenes that depict graspable objects within reach, and otherwise rely more on general meaning. The results shed light on which aspects of meaning guide attention during scene viewing in language production tasks.
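The comparison of predictors can be pictured as asking which feature map best matches where people actually looked. The sketch below illustrates that logic with random stand-in data; the map shapes, weights, and simple map-level correlation are assumptions for illustration, and the authors' actual analyses were more sophisticated:

```python
# Hypothetical sketch of comparing predictor maps (salience, meaning,
# graspability) against a fixation-density map; all values are made up.
import numpy as np

def map_correlation(feature_map: np.ndarray, attention_map: np.ndarray) -> float:
    """Pearson correlation between a flattened feature map and attention map."""
    return float(np.corrcoef(feature_map.ravel(), attention_map.ravel())[0, 1])

rng = np.random.default_rng(0)
attention = rng.random((24, 32))  # stand-in fixation-density map
predictors = {
    "salience": rng.random((24, 32)),                         # unrelated noise
    "meaning": 0.7 * attention + 0.3 * rng.random((24, 32)),  # fabricated signal
    "graspability": 0.5 * attention + 0.5 * rng.random((24, 32)),
}
for name, fmap in predictors.items():
    print(f"{name}: r = {map_correlation(fmap, attention):.2f}")
```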


2013 ◽  
Vol 5 (1) ◽  
pp. 70-86
Author(s):  
Raymond W. Gibbs, Jr.

Almost everyone agrees that context is critical to the pragmatic interpretation of speakers’ utterances. But an enduring debate within cognitive science concerns when context exerts its influence in shaping people’s interpretations of what speakers imply by what they say. Some scholars maintain that context is consulted only after an initial linguistic analysis of an utterance has been performed, while others argue that context is present at all stages of immediate linguistic processing. Empirical research on this debate is, in my view, hopelessly deadlocked. My goal in this article is to advance a framework for thinking about the context for linguistic performance that conceives of human cognition and language use in terms of dynamical, self-organized processes. A self-organizational view of the context for linguistic performance demands that we acknowledge the multiple, interacting constraints that create, or soft-assemble, any specific moment of pragmatic experience. Pragmatic action and understanding are not a matter of producing or recovering a “meaning” but a continuously unfolding temporal process of the person adapting and orienting to the world. I discuss the implications of this view for the study of pragmatic meaning in discourse.


2016 ◽  
Vol 20 (4) ◽  
pp. 834-843 ◽  
Author(s):  
JENNIFER KRIZMAN ◽  
ANN R. BRADLOW ◽  
SILVIA SIU-YIN LAM ◽  
NINA KRAUS

Bilinguals are known to perform worse than monolinguals on speech-in-noise tests, but the mechanisms underlying this difference are unclear. By varying the amount of linguistic information available in the target stimulus across five auditory-perception-in-noise tasks, we tested whether differences in language-independent (sensory/cognitive) or language-dependent (extracting linguistic meaning) processing could account for this disadvantage. We hypothesized that language-dependent processing differences underlie the bilingual disadvantage and predicted that it would manifest on perception-in-noise tasks that use linguistic stimuli. Performance differences between bilinguals and monolinguals indeed varied with the linguistic processing demands of each task: early, high-proficiency Spanish–English bilingual adolescents performed worse than English monolingual adolescents when perceiving sentences, similarly when perceiving words, and better when perceiving tones in noise. This pattern suggests that bottlenecks in language-dependent processing underlie the bilingual disadvantage, while language-independent perception-in-noise processes are enhanced.

