unimodal condition
Recently Published Documents


TOTAL DOCUMENTS: 2 (FIVE YEARS: 0)
H-INDEX: 2 (FIVE YEARS: 0)

2015 · Vol. 58 (6) · pp. 1805-1817
Author(s): Kimberly G. Smith, Daniel Fogerty

Purpose: This study evaluated the extent to which partial spoken or written information facilitates sentence recognition under degraded unimodal and multimodal conditions.

Method: Twenty young adults with typical hearing completed sentence recognition tasks in unimodal and multimodal conditions across 3 proportions of preservation. In the unimodal condition, performance was examined when only interrupted text or interrupted speech stimuli were available. In the multimodal condition, performance was examined when both interrupted text and interrupted speech stimuli were concurrently presented. Sentence recognition scores were obtained from simultaneous and delayed response conditions.

Results: Significantly better performance was obtained for unimodal speech-only compared with text-only conditions across all proportions preserved. The multimodal condition revealed better performance when responses were delayed. During simultaneous responses, participants received equal benefit from speech information when the text was moderately and significantly degraded. The benefit from text in degraded auditory environments occurred only when speech was highly degraded.

Conclusions: The speech signal, compared with text, is robust against degradation, likely due to its continuous, versus discrete, features. Allowing time for offline linguistic processing is beneficial for the recognition of partial sensory information in unimodal and multimodal conditions. Despite the perceptual differences between the 2 modalities, the results highlight the utility of multimodal speech + text signals.



2002 · Vol. 14 (1) · pp. 62-69
Author(s): Francesca Frassinetti, Francesco Pavani, Elisabetta Làdavas

Cross-modal spatial integration between auditory and visual stimuli is a common phenomenon in space perception. The principles underlying such integration have been outlined by neurophysiological and behavioral studies in animals (Stein & Meredith, 1993), but little evidence exists that similar principles also operate in humans. In the present study, we explored this possibility in patients with visual neglect, that is, patients with visuospatial impairment. To test this hypothesis, neglect patients were required to detect a brief flash of light presented in one of six spatial positions, either in a unimodal condition (i.e., only visual stimuli were presented) or in a cross-modal condition (i.e., a sound was presented simultaneously with the visual target, either at the same spatial position or at one of the remaining five possible positions). The results showed an improvement in visual detection when the visual and auditory stimuli originated from the same position in space or at a close spatial disparity (16°). In contrast, no improvement was found when the spatial separation between the visual and auditory stimuli was larger than 16°. Moreover, the improvement was larger for the visual positions most affected by the spatial impairment, that is, the most peripheral positions in the left visual field (LVF). In conclusion, the results of the present study considerably extend our knowledge of multisensory integration by showing in humans the existence of an integrated visuoauditory system with functional properties similar to those found in animals.


