Information integration in modulation of pragmatic inferences during online language comprehension

2017 ◽  
Author(s):  
Rachel Anna Ryskin ◽  
Chigusa Kurumada ◽  
Sarah Brown-Schmidt

Upon hearing a scalar adjective in a definite referring expression such as “the big…,” listeners typically make anticipatory eye movements to an item in a contrast set, such as a big glass in the context of a smaller glass. Recent studies have suggested that this rapid, contrastive interpretation of scalar adjectives is malleable and calibrated to the speaker’s pragmatic competence. In a series of eye tracking experiments, we explore the nature of the evidence necessary for the modulation of pragmatic inferences in language comprehension, focusing on the complementary roles of top-down information (knowledge about the particular speaker’s pragmatic competence) and bottom-up cues (distributional information about the use of scalar adjectives in the environment). We find that bottom-up evidence alone (e.g., the speaker says “the big dog” in a context with one dog), in large quantities, can be sufficient to trigger modulation of the listener’s contrastive inferences, with or without top-down cues to support this adaptation. Further, these findings suggest that listeners track and flexibly combine multiple sources of information in service of efficient pragmatic communication.

Author(s):  
Martin V. Butz ◽  
Esther F. Kutter

While bottom-up visual processing is important, the brain integrates this information with top-down, generative expectations from very early on in the visual processing hierarchy. Indeed, our brain should not be viewed as a classification system, but rather as a generative system, which perceives something by integrating sensory evidence with the available, learned, predictive knowledge about that thing. The involved generative models continuously produce expectations over time, across space, and from abstracted encodings to more concrete encodings. Bayesian information processing is the key to understanding how information integration must work computationally, at least approximately, in the brain. Bayesian networks in the form of graphical models allow the modularization of information and the factorization of interactions, which can strongly improve the efficiency of generative models. The resulting generative models essentially produce state estimations in the form of probability densities, which are very well-suited to integrate multiple sources of information, including top-down and bottom-up ones. A hierarchical neural visual processing architecture illustrates this point even further. Finally, some well-known visual illusions are shown and the perceptions are explained by means of generative, information integrating, perceptual processes, which in all cases combine top-down prior knowledge and expectations about objects and environments with the available, bottom-up visual information.
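The precision-weighted fusion described above can be made concrete with a minimal sketch (illustrative only, not code from the chapter): if both the top-down prior and the bottom-up sensory cue are assumed to be Gaussian, the posterior is again Gaussian, and its mean is a weighted average in which the more reliable (lower-variance) source dominates.

```python
def integrate(prior_mean, prior_var, cue_mean, cue_var):
    """Fuse a Gaussian top-down prior with a Gaussian bottom-up cue.

    Each source is weighted by its precision (inverse variance), so
    the more reliable source pulls the posterior estimate toward it.
    """
    prior_precision = 1.0 / prior_var
    cue_precision = 1.0 / cue_var
    post_var = 1.0 / (prior_precision + cue_precision)
    post_mean = post_var * (prior_precision * prior_mean +
                            cue_precision * cue_mean)
    return post_mean, post_var

# A strong expectation (low prior variance) dominates a noisy cue:
# the estimate stays close to the prior despite the conflicting input.
mean, var = integrate(prior_mean=0.0, prior_var=0.5,
                      cue_mean=2.0, cue_var=2.0)
```

This is the one-dimensional special case; the graphical models discussed in the abstract generalize the same precision-weighted combination across many variables and hierarchical levels.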


2012 ◽  
Vol 4 (1) ◽  
pp. 17-41 ◽  
Author(s):  
Anna K. Kuhlen ◽  
Alexia Galati ◽  
Susan E. Brennan

Speakers adapt their speech based on both prior expectations and incoming cues about their addressees' informational needs (Kuhlen and Brennan 2010). Here, we investigate whether top-down information, such as speakers' expectations about addressees' attentiveness, and bottom-up cues, such as addressees' feedback during conversation, also influence speakers' gestures. In 39 dyads, addressees were either attentive when speakers told a joke or else distracted by a second task, while speakers expected addressees to be either attentive or distracted. Independently of adjustments in speech, both speakers' expectations and addressees' feedback shaped quantitative and qualitative aspects of gesturing. Speakers gestured more frequently when their prior expectations matched addressees' actual behavior. Moreover, speakers with attentive addressees gestured more in the periphery of gesture space when they expected addressees to be attentive. These systematic adjustments in gesturing suggest that speakers flexibly adapt to their addressees by integrating bottom-up cues available during the interaction in light of attributions made from top-down expectations. That these sources of information lead to adjustments patterning similarly in speech and gesture informs theoretical frameworks of how different modalities are deployed and coordinated in dialogue.


1988 ◽  
Vol 32 (19) ◽  
pp. 1335-1339 ◽  
Author(s):  
Christopher D. Wickens ◽  
Anthony D. Andre

Object displays have been proposed as an efficient, economical means for presenting multiple sources of information that must be integrated. In this paper, we outline the fundamental theoretical and applied principles that have been cited to justify object display advantages, and suggest some modifications to those principles. In particular, we describe the proximity compatibility principle, which asserts that object displays will facilitate information integration, but disrupt focused attention on the individual dimensions of the object. We then discriminate between homogeneous and heterogeneous feature objects, suggesting that only the former will produce emergent features that can facilitate information integration. Finally, we describe an experiment in which the object display is designed to incorporate an emergent feature that will support the perception of aircraft stall conditions. Evaluation of the display reveals superior integration performance to a separate bar graph display, but degraded focused attention performance, thus illustrating the proximity compatibility principle.


PsycCRITIQUES ◽  
2005 ◽  
Vol 50 (19) ◽  
Author(s):  
Michael Cole
Keyword(s):  
Top Down ◽  