redundancy hypothesis
Recently Published Documents

Total documents: 16 (five years: 5)
H-index: 6 (five years: 0)

2021 ◽  
Author(s):  
Rebecca Stein

Infants explore the world through many combinations of sight, sound, smell, taste and touch. A recent theory known as the “intersensory redundancy hypothesis” posits that the temporal overlap of stimulation across different sense modalities drives selective attention in infancy. Social communication typically involves visual, auditory and tactile cues for infants. Although infrequently studied, rhythmic touch is thought to be inherently rewarding; if manipulated within a social context, it may be able to reinforce joint attention. Given that joint attention is fundamental to the development of social communication, this study investigated the convergent effects of visual, auditory and tactile cues on the expression of joint attention in 10 infants between 11 and 12 months of age. The addition of synchronized (but not asynchronous) tactile stimulation to natural communication cues was associated with higher performance on a joint attention measure (i.e., more frequent responses to parental requests). Implications for autism are discussed.


2021 ◽  
Author(s):  
Catherine Davies ◽  
Anna Richardson

Studies investigating how overspecified referring expressions (e.g., the stripy cup to describe a single cup) affect referent identification have variously found that overspecification slows identification, speeds it up, or has no effect on processing speed. To date, these studies have all used adjectives that are semantically arbitrary within the sentential context. In addition to the standard ‘informativeness’ design that manipulates the presence of contrast sets, we controlled the semantic relevance of adjectives in discourse to reveal whether overspecifying adjectives affect processing when they are relevant to the context (fed the hungry rabbit) compared to when they are not (tickled the hungry rabbit). Using a self-paced reading paradigm with a sample of adult participants (N=31), we found that overspecified noun phrases were read more slowly than those that distinguished a member of a contrast set. Importantly, this penalty was mitigated when adjectives were semantically relevant. Contrary to classical approaches, we show that modifiers do not necessarily presuppose a set, and that referential and semantic information is integrated rapidly in pragmatic processing. Our data support Fukumura and van Gompel’s (2017) meaning-based redundancy hypothesis, which predicts that it is the specific semantic representation of the overspecifying adjective that determines whether a penalty is incurred, rather than generic Gricean expectations. We extend this account using a novel experimental design.


2020 ◽  
Author(s):  
Christine Cuskley ◽  
Joel Wallenberg

Over the past decade and a half, several lines of research have investigated aspects of the smooth signalling redundancy hypothesis. This hypothesis proposes that speakers distribute the information in linguistic utterances as evenly as possible, in order to make the utterance more robust against noise for the hearer. Several studies have shown evidence for this hypothesis in limited linguistic domains, showing that speakers manipulate acoustic and syntactic features to avoid drastic spikes or troughs in information content. In theory, the mechanism behind this is that such spikes would make utterances more vulnerable to noise events, and thus to communicative failure. However, this previous work does not consider information density across entire utterances, and only rarely has this mechanism been directly explored. Here, we introduce a new descriptive statistic that quantifies the uniformity of information across an entire utterance, alongside an algorithm that can measure the uniformity of actual utterances against an optimized distribution. Using a simple simulation, we show that utterances optimized for more uniform distributions of information are, in fact, more robust against noise.
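
The abstract does not define the statistic itself, but the idea of scoring how evenly information is spread across an utterance can be sketched from per-token surprisal values. The function below is a minimal illustration: the name and the coefficient-of-variation formulation are illustrative choices, not the paper's actual measure.

```python
import math

def information_uniformity(surprisals):
    """Illustrative uniformity score for per-token surprisal values (in bits).

    Returns 1.0 when information is spread perfectly evenly across the
    utterance and approaches 0.0 as the distribution becomes spikier.
    This is a generic coefficient-of-variation-based score, not the
    statistic defined in the paper.
    """
    n = len(surprisals)
    total = sum(surprisals)
    if n == 0 or total == 0:
        return 1.0
    mean = total / n
    variance = sum((s - mean) ** 2 for s in surprisals) / n
    cv = math.sqrt(variance) / mean  # coefficient of variation
    return 1.0 / (1.0 + cv)

# A spiky utterance ([1, 1, 10, 1] bits) scores lower than a smooth one
# ([3.25, 3.25, 3.25, 3.25] bits) carrying the same total information.
print(information_uniformity([1, 1, 10, 1]))   # ~0.45 (less uniform)
print(information_uniformity([3.25] * 4))      # 1.0 (perfectly uniform)
```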


2019 ◽  
Author(s):  
Benjamin Tucker ◽  
Michelle Sims ◽  
R. H. Baayen

The present paper investigates the influence of opposing lexical forces on speech production using the duration of the stem vowel of regular and irregular verbs as attested in the Buckeye corpus of conversational North American English. We compared two sets of predictors, reflecting two different approaches to speech production: one based on competition between word forms, the other based on principles of discrimination learning. Classical measures in word-form competition theories, such as word frequency, lexical density, and gang size (types of vocalic alternation), were predictive of stem vowel duration. However, more precise predictions were obtained using measures derived from a two-layer network model trained on the Buckeye corpus. Measures representing strong bottom-up support predicted longer vowel durations. Conversely, measures reflecting uncertainty, including a measure of the verb’s semantic density, predicted shorter vowel durations. The learning-based model also suggests that it is not a verb’s frequency as such that gives rise to shorter vowel duration, but rather a verb’s collocational diversity. Results are discussed with reference to the Smooth Signal Redundancy Hypothesis and the Paradigmatic Signal Enhancement Hypothesis.
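
The abstract does not spell out the network's architecture, but two-layer discriminative models of this kind are typically trained with the Rescorla-Wagner update rule, mapping form cues onto lexical outcomes. The sketch below illustrates that rule under this assumption; the cue and outcome representations, parameter values, and function names are hypothetical rather than taken from the paper.

```python
from collections import defaultdict

def train_rescorla_wagner(events, learning_rate=0.01, lambda_max=1.0):
    """Incrementally learn cue-to-outcome association weights with the
    Rescorla-Wagner rule, the update commonly used in two-layer
    discriminative learning models of the lexicon.

    `events` is an iterable of (cues, outcomes) pairs, e.g. a word's letter
    bigrams as cues and the word itself (or its meaning) as the outcome.
    """
    weights = defaultdict(float)   # (cue, outcome) -> association strength
    known_outcomes = set()
    for cues, outcomes in events:
        known_outcomes.update(outcomes)
        for outcome in known_outcomes:
            # Current bottom-up support for this outcome from the event's cues.
            support = sum(weights[(c, outcome)] for c in cues)
            target = lambda_max if outcome in outcomes else 0.0
            delta = learning_rate * (target - support)
            for c in cues:
                weights[(c, outcome)] += delta
    return weights

def activation(weights, cues, outcome):
    """Summed bottom-up support for an outcome given a set of cues."""
    return sum(weights[(c, outcome)] for c in cues)

# Toy usage: two learning events, then the model's support for "walked".
events = [({"#w", "wa", "al", "lk", "ke", "ed", "d#"}, {"walked"}),
          ({"#w", "wa", "al", "lk", "ks", "s#"}, {"walks"})]
w = train_rescorla_wagner(events, learning_rate=0.1)
print(activation(w, {"#w", "wa", "al", "lk", "ke", "ed", "d#"}, "walked"))
```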


2017 ◽  
Vol 59 (7) ◽  
pp. 910-915 ◽  
Author(s):  
Robert Lickliter ◽  
Lorraine E. Bahrick ◽  
Jimena Vaillant-Mekras

2017 ◽  
Author(s):  
Daniel C. Hyde ◽  
Ross Flom ◽  
Chris L. Porter

In this paper we describe behavioral and neurophysiological evidence for infants’ multimodal face-voice perception. We argue that the behavioral development of face-voice perception, like multimodal perception more broadly, is consistent with the intersensory redundancy hypothesis (IRH). Furthermore, we highlight that several recently observed features of the neural responses in infants converge with the behavioral predictions of the intersensory redundancy hypothesis. Finally, we discuss the potential benefits of combining brain and behavioral measures to study multisensory processing, as well as some applications of this work for atypical development.


2016 ◽  
Author(s):  
Monica Nordberg ◽  
Douglas M. Templeton ◽  
Ole Andersen ◽  
John H. Duffus
