When vision is not an option: Development of haptic–auditory integration

2012 ◽  
Vol 25 (0) ◽  
pp. 205
Author(s):  
Karin Petrini ◽  
Alicia Remark ◽  
Louise Smith ◽  
Marko Nardini

To perform everyday tasks, such as crossing a road, we rely heavily on our sight. However, certain situations (e.g., an extremely dark environment) as well as visual impairments can either reduce the reliability of this sensory information or remove it completely. In these cases, the use of other information is vital. Here we examine the development of haptic–auditory integration. Three groups of adults and 5- to 12-year-old children were asked to judge which of two balls, one of standard size and one of variable size, was larger. One group performed the task with auditory information only, haptic information only, or both. Auditory information about object size came from the loudness of a naturalistic sound played when observers knocked the ball against a touch-pad. A second group performed the same conditions while wearing a thick glove that reduced the reliability of the haptic information. Finally, a third group performed the task with either congruent or incongruent bimodal information. Psychometric functions were fitted to the responses in order to measure observers' sensitivity to object size under these different conditions. Integration of haptic and auditory information predicts greater sensitivity in the bimodal condition than in either single-modality condition. Initial results show that young children do not integrate information from the haptic and auditory modalities, with some children below 8 years of age performing worse in the bimodal condition than in the auditory-only condition. Older children and adults appear able to integrate auditory and haptic information, especially when the reliability of the haptic information is reduced.
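For context, the standard maximum-likelihood (optimal) cue-integration model is the usual source of this prediction; the abstract does not name its exact model, so the following is an illustrative sketch. If \sigma_H and \sigma_A are the haptic and auditory size-discrimination thresholds estimated from the unimodal psychometric functions, the model predicts a bimodal threshold and weighted size estimate of

\sigma_{HA} = \sqrt{\frac{\sigma_H^2\,\sigma_A^2}{\sigma_H^2 + \sigma_A^2}}, \qquad \hat{S}_{HA} = w_H\,\hat{S}_H + w_A\,\hat{S}_A, \qquad w_H = \frac{\sigma_A^2}{\sigma_H^2 + \sigma_A^2},\; w_A = 1 - w_H.

For equally reliable cues (\sigma_H = \sigma_A = \sigma) the predicted bimodal threshold is \sigma/\sqrt{2}, roughly a 29% improvement; reducing haptic reliability (as with the thick glove) shifts the weights toward audition while still predicting a bimodal advantage.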

2006 ◽  
Vol 1 (1) ◽  
pp. 73-81
Author(s):  
Wellington R. G. de Carvalho ◽  
Ana M. Pellegrini

The present study examined the stability of rope-jumping skill, measured by relative phase, under different availability of sensory information. Nine male and nine female university students were required to perform a sequence of rope jumps at different pacing frequencies (1.4, 1.6, and 1.8 Hz) and in two conditions: (a) the rope was turned by the performers themselves (haptic information available), and (b) the rope was turned by others (visual and auditory information available). Passive markers were fixed on the rope and on the hip, knee, and ankle joints for analysis of the dependent variables: height of the rope, height of the jump, and discrete relative phase. Overall, the results suggested that the motor pattern for jumping the rope is more stable when performers turn the rope themselves, and are consequently able to use haptic information to control the motor action, than when only visual and auditory information are available.
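As a rough illustration (the abstract does not give the exact computation used), discrete relative phase between two rhythmic event series is typically the latency of one event relative to the other, normalized by the cycle period and expressed in degrees:

\phi_i = 360^{\circ} \times \frac{t^{\text{jump}}_i - t^{\text{rope}}_i}{t^{\text{rope}}_{i+1} - t^{\text{rope}}_i},

where t^{\text{rope}}_i marks a reference event in the i-th rope cycle (e.g., the rope passing its lowest point) and t^{\text{jump}}_i the corresponding jump event; lower variability of \phi across cycles indicates a more stable coordination pattern.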


2009 ◽  
Vol 26 (5) ◽  
pp. 415-425 ◽  
Author(s):  
Janeen D. Loehr ◽  
Caroline Palmer

The current study examined how auditory and kinematic information influenced pianists' ability to synchronize musical sequences with a metronome. Pianists performed melodies in which quarter-note beats were subdivided by intervening eighth notes that resulted from auditory information (heard tones), motor production (produced tones), both, or neither. Temporal accuracy of performance was compared with finger trajectories recorded with motion capture. Asynchronies were larger when motor or auditory sensory information occurred between beats; auditory information yielded the largest asynchronies. Pianists were sensitive to the timing of the sensory information; information that occurred earlier relative to the midpoint between metronome beats was associated with larger asynchronies on the following beat. Finger motion was influenced only by motor production between beats and indicated the influence of other fingers' motion. These findings demonstrate that synchronization accuracy in music performance is influenced by both the timing and modality of sensory information that occurs between beats.
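In synchronization tasks of this kind, the asynchrony on each beat is conventionally the signed difference between the produced onset and the metronome onset (a standard definition assumed here, not stated in the abstract):

A_i = t^{\text{keystroke}}_i - t^{\text{metronome}}_i,

so negative values indicate that the keystroke anticipated the beat, and larger magnitudes indicate poorer synchronization.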


2019 ◽  
Author(s):  
Yoonsun Yang ◽  
Joonyeol Lee ◽  
Gunsoo Kim

The inferior colliculus (IC) is the major midbrain auditory integration center, where virtually all ascending auditory inputs converge. Although the IC has been extensively studied for sound processing, little is known about the neural activity of the IC in moving subjects, as frequently occurs under natural hearing conditions. Here we show, by recording IC neural activity in walking mice, that the activity of IC neurons is strongly modulated by locomotion even in the absence of sound stimulus presentation. Similar modulation was also found in deafened mice, demonstrating that IC neurons receive non-auditory, locomotion-related neural signals. Sound-evoked activity was attenuated during locomotion, and this attenuation increased frequency selectivity across the population while maintaining preferred frequencies. Our results suggest that during behavior, integrating movement-related and auditory information is an essential aspect of sound processing in the IC.


2017 ◽  
Author(s):  
Frank Papenmeier ◽  
Annika Maurer ◽  
Markus Huff

Background. Human observers segment dynamic information into discrete events. That is, although sensory information is continuous, comprehenders perceive boundaries between meaningful units of information. In narrative comprehension, comprehenders use linguistic, non-linguistic, and physical cues for this event boundary perception. Yet it is an open question, from both a theoretical and an empirical perspective, how linguistic and non-linguistic cues contribute to this process. The current study explores how linguistic cues contribute to participants' ability to segment continuous auditory information into discrete, hierarchically structured events. Methods. Native speakers of German and non-native speakers, who neither spoke nor understood German, segmented a German audio drama into coarse and fine events. Whereas native participants could make use of linguistic, non-linguistic, and physical cues for segmentation, non-native participants could only use non-linguistic and physical cues. We analyzed segmentation behavior in terms of the ability to identify coarse and fine event boundaries and the resulting hierarchical structure. Results. Non-native listeners identified essentially the same coarse event boundaries as native listeners but missed some of the fine event boundaries identified by the native listeners. Interestingly, hierarchical event perception (as measured by hierarchical alignment and enclosure) was comparable for native and non-native participants. Discussion. In summary, linguistic cues contributed particularly to the identification of certain fine event boundaries. The results are discussed with regard to current theories of event cognition.


Author(s):  
Adam F. Werner ◽  
Jamie C. Gorman ◽  
Michael J. Crites

When visual or auditory perceptual information is lacking, many tasks require interpersonal coordination and teaming. Dyadic verbal and/or auditory communication typically results in the two people becoming informationally coupled. This experiment examined such coupling using a two-person remote navigation task in which one participant blindly drove a remote-controlled car while another participant provided auditory cues, visual cues, or a combination of both (bimodal). Under these conditions, we evaluated performance at easy, moderate, and hard task difficulties. We predicted that the visual condition would yield higher performance overall and that the bimodal condition would yield higher performance as difficulty increased. Results indicated that visual coupling performed better overall than auditory coupling and that bimodal coupling showed increased performance as task difficulty went from moderate to hard. When coupling is auditory, the frequency at which teams communicate affects performance: the faster teams spoke, the better they performed, even with visual communication available.


2000 ◽  
Vol 278 (1) ◽  
pp. G6-G9 ◽  
Author(s):  
Donald B. Katz ◽  
Miguel A. L. Nicolelis ◽  
S. A. Simon

The tongue is the principal organ that provides sensory information about the quality and quantity of chemicals in food. Other information about the temperature and texture of food is also transduced on the tongue, via extragemmal receptors that form branches of the trigeminal, glossopharyngeal, and vagal nerves. These systems, together with information from the gastrointestinal (GI) system, interact to determine whether or not food is palatable. In this themes article, emphasis is placed on the integrative aspects of gustatory processing by showing the convergence of gustatory information with somatosensory, nociceptive, and visceral information (from the GI system) on the tongue and in the brain. Our thesis is that gustation should be thought of as an integral part of a distributed, interacting multimodal system in which information from other systems, including the GI system, can modulate the taste of food.


2007 ◽  
Vol 98 (4) ◽  
pp. 2399-2413 ◽  
Author(s):  
Vivian M. Ciaramitaro ◽  
Giedrius T. Buračas ◽  
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended to visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus than when attending to an auditory stimulus. The opposite was true in the later visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or in the same region of space.


2015 ◽  
Vol 112 (44) ◽  
pp. 13525-13530 ◽  
Author(s):  
Ben M. Harvey ◽  
Alessio Fracasso ◽  
Natalia Petridou ◽  
Serge O. Dumoulin

Humans and many animals analyze sensory information to estimate quantities that guide behavior and decisions. These quantities include numerosity (object number) and object size. Having recently demonstrated topographic maps of numerosity, we ask whether the brain also contains maps of object size. Using ultra-high-field (7T) functional MRI and population receptive field modeling, we describe tuned responses to visual object size in bilateral human posterior parietal cortex. Tuning follows linear Gaussian functions and shows surround suppression, and tuning width narrows with increasing preferred object size. Object size-tuned responses are organized in bilateral topographic maps, with similar cortical extents responding to large and small objects. These properties of object size tuning and map organization all differ from the numerosity representation, suggesting that object size and numerosity tuning result from distinct mechanisms. However, their maps largely overlap and object size preferences correlate with numerosity preferences, suggesting associated representations of these two quantities. Object size preferences here show no discernible relation to the visual position preferences found in visuospatial receptive fields. As such, object size maps (much like numerosity maps) do not reflect sensory organ structure but instead emerge within the brain. We speculate that, as in sensory processing, optimization of cognitive processing using topographic maps may be a common organizing principle in association cortex. Interactions between object size and numerosity maps may associate cognitive representations of these related features, potentially allowing consideration of both quantities together when making decisions.
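One common way to capture Gaussian tuning with surround suppression in population receptive field modeling is a difference-of-Gaussians response profile; this is an illustrative form, not necessarily the exact model fit in the study:

r(s) = A_c \exp\!\left(-\frac{(s - s_{\text{pref}})^2}{2\sigma_c^2}\right) - A_s \exp\!\left(-\frac{(s - s_{\text{pref}})^2}{2\sigma_s^2}\right), \qquad \sigma_s > \sigma_c,\; A_c > A_s,

where s is object size on a linear axis, s_{\text{pref}} is the preferred size, the narrow positive Gaussian gives the tuned response, and the broader negative Gaussian produces suppression for sizes away from the preferred value. The reported narrowing of tuning width with increasing preferred size would then correspond to \sigma_c decreasing as s_{\text{pref}} increases.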


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0248084
Author(s):  
Vonne van Polanen

When grasping an object, the opening between the fingertips (grip aperture) scales with the size of the object. If an object changes in size, the grip aperture has to be corrected. This study investigated whether such corrections influence the perceived size of objects. The grasping plan was manipulated with a preview of the object, after which participants initiated their reaching movement without vision. In a minority of the grasps, the object changed in size after the preview and participants had to adjust their grasping movement. Visual feedback was manipulated across two experiments. In experiment 1, vision was restored during the reach, and both visual and haptic information were available to correct the grasp and lift the object. In experiment 2, no visual information was provided during the movement, and grasps could only be corrected using haptic information. Participants made reach-to-grasp movements towards two objects and compared their sizes. Results showed that, in both experiments, participants adjusted their grasp to a change in object size from preview to grasped object. However, a change in object size did not bias the perception of object size or alter discrimination performance. In experiment 2, a small perceptual bias was found when objects changed from large to small, but this bias was much smaller than the difference that could be discriminated and cannot be considered meaningful. Therefore, it can be concluded that the planning and execution of reach-to-grasp movements do not reliably affect the perception of object size.


Author(s):  
Knut Drewing ◽  
Alexandra Lezkan

Haptic texture perception is based on sensory information gathered sequentially over several lateral movements ("strokes"). In this process, sensory information from earlier strokes must be preserved in a memory system. We investigated whether this system may be a haptic sensory memory. In the first experiment, participants performed three strokes across each of two textures in a frequency discrimination task. Between the strokes over the first texture, participants explored an intermediate area, which presented either a mask (a high-energy tactile pattern) or minimal stimulation (a low-energy smooth surface). Perceptual precision was significantly lower with the mask than in a three-strokes control condition without an intermediate area, approaching performance in a one-stroke control condition. In contrast, precision in the minimal-stimulation condition was significantly better than in the one-stroke control condition and similar to the three-strokes control condition. In a second experiment, we varied the number of strokes across the first stimulus (one, three, five, or seven strokes) and either presented no masking or repeated masking after each stroke. Again, masking between the strokes decreased perceptual precision relative to the control conditions without masking. Precision effects of masking over different numbers of strokes were well fit by an established model of haptic serial integration (Lezkan & Drewing, Attention, Perception, & Psychophysics 80(1): 177–192, 2018b), in which masking was modeled as repeated disturbances of the ongoing integration. Taken together, the results suggest that masking impedes the preservation and integration of haptic information. We conclude that a haptic sensory memory, comparable to iconic memory in vision, is used for integrating sequentially gathered sensory information.

