Eye Position Affects Audio-Visual Fusion in Darkness

Perception
10.1068/p5847
2007
Vol 36 (10)
pp. 1487-1496
Author(s):  
David Hartnagel ◽  
Alain Bichot ◽  
Corinne Roumes

We investigated the frame of reference involved in audio-visual (AV) fusion over space. This multisensory phenomenon refers to the perception of unity resulting from visual and auditory stimuli despite their potential spatial disparity. The extent of this illusion depends on the eccentricity in azimuth of the bimodal stimulus (Godfroy et al, 2003 Perception 32 1233–1245). In a previous study, conducted in a luminous environment, Roumes et al (2004 Perception 33 Supplement, 142) showed that the variation of AV fusion is gaze-dependent. Here we examine the contribution of ego- or allocentric visual cues by conducting the experiment in total darkness. Auditory and visual stimuli were displayed in synchrony with various spatial disparities. Subjects had to judge their unity (‘fusion’ or ‘no fusion’). Results showed that AV fusion in darkness remains gaze-dependent despite the lack of any allocentric cues, confirming the hypothesis that the reference frame of the bimodal space is neither head-centred nor eye-centred.
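Since each trial yields a binary ‘fusion’/‘no fusion’ judgment at a known AV disparity, gaze dependence of the kind reported here can be quantified by fitting a psychometric curve per gaze position and comparing the fitted fusion thresholds. The following is a minimal Python sketch of such an analysis; the logistic form, the disparities, and the response rates are hypothetical illustrations, not the study's data or method.

```python
# Minimal sketch (hypothetical data): fit a descending logistic to the
# proportion of 'fusion' reports as a function of AV spatial disparity,
# separately per gaze position; a threshold shift indicates gaze dependence.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(disparity, threshold, slope):
    """P('fusion') falls from ~1 to ~0 as disparity exceeds the threshold."""
    return 1.0 / (1.0 + np.exp((disparity - threshold) / slope))

disparity_deg = np.array([0, 4, 8, 12, 16, 20, 24], dtype=float)
p_fusion = {
    "gaze left":  np.array([0.98, 0.95, 0.80, 0.55, 0.30, 0.12, 0.05]),
    "gaze right": np.array([0.97, 0.90, 0.68, 0.40, 0.18, 0.08, 0.03]),
}

for gaze, p in p_fusion.items():
    (threshold, slope), _ = curve_fit(psychometric, disparity_deg, p, p0=[10.0, 3.0])
    print(f"{gaze}: fusion threshold ~ {threshold:.1f} deg")
# Different thresholds across gaze positions would reproduce, in miniature,
# the gaze-dependent fusion the study reports.
```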


2002
Vol 14 (1)
pp. 62-69
Author(s):  
Francesca Frassinetti ◽  
Francesco Pavani ◽  
Elisabetta Làdavas

Cross-modal spatial integration between auditory and visual stimuli is a common phenomenon in space perception. The principles underlying such integration have been outlined by neurophysiological and behavioral studies in animals (Stein & Meredith, 1993), but little evidence exists that similar principles also operate in humans. In the present study, we explored this possibility in patients with visual neglect, namely, patients with a visuospatial impairment. Neglect patients were required to detect a brief flash of light presented in one of six spatial positions, either in a unimodal condition (i.e., only visual stimuli were presented) or in a cross-modal condition (i.e., a sound was presented simultaneously with the visual target, either at the same spatial position or at one of the remaining five positions). The results showed an improvement in visual detection when the visual and auditory stimuli originated from the same position in space or at a small spatial disparity (16°). In contrast, no improvement was found when the spatial separation of the visual and auditory stimuli was larger than 16°. Moreover, the improvement was larger for visual positions that were more affected by the spatial impairment, i.e., the most peripheral positions in the left visual field (LVF). In conclusion, the results of the present study considerably extend our knowledge of multisensory integration by showing in humans the existence of an integrated visuo-auditory system with functional properties similar to those found in animals.
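The spatial rule at work here (multisensory enhancement only when the sound falls near the visual target) can be made concrete with a small calculation. Below is a minimal Python sketch; the detection rates and disparities are hypothetical and only illustrate the 16° window described in the abstract.

```python
# Minimal sketch (hypothetical detection rates): cross-modal enhancement as a
# function of audio-visual disparity, with enhancement expected only within
# the ~16-degree spatial window the study describes.
import numpy as np

disparities_deg = np.array([0, 16, 32, 48, 64, 80])
unimodal_hit_rate = 0.35                                   # visual-only baseline
crossmodal_hit_rate = np.array([0.62, 0.58, 0.37, 0.36, 0.35, 0.34])

enhancement = crossmodal_hit_rate - unimodal_hit_rate
for d, e in zip(disparities_deg, enhancement):
    window = "inside" if d <= 16 else "outside"
    print(f"disparity {d:2d} deg: enhancement {e:+.2f} ({window} spatial window)")
```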



2021
Vol 11 (1)
Author(s):  
Stefano Rozzi ◽  
Marco Bimbi ◽  
Alfonso Gravante ◽  
Luciano Simone ◽  
Leonardo Fogassi

The ventral part of the lateral prefrontal cortex (VLPF) of the monkey receives strong visual input, mainly from inferotemporal cortex. It has been shown that VLPF neurons can show visual responses in paradigms requiring arbitrary visual cues to be associated with behavioral reactions. Further studies showed that there are also VLPF neurons responding to the presentation of specific visual stimuli, such as objects and faces. However, it is largely unknown whether VLPF neurons respond to and differentiate between stimuli belonging to different categories in the absence of any specific requirement to actively categorize the stimuli or to exploit them for choosing a given behavior. The first aim of the present study is to evaluate and map the responses of neurons in a large sector of VLPF to a wide set of visual stimuli when monkeys simply observe them. Recent studies showed that visual responses to objects are also present in VLPF neurons coding action execution, when the objects are the target of the action. Thus, the second aim of the present study is to compare the visual responses of VLPF neurons when the same objects are simply observed and when they become the target of a grasping action. Our results indicate that: (1) some visually responsive VLPF neurons respond specifically to one stimulus or to a small set of stimuli, but there is no indication of a “passive” categorical coding; (2) VLPF visual responses to objects are often modulated by the task conditions in which the object is observed, with the strongest response when the object is the target of an action. These data indicate that VLPF performs an early, passive description of several types of visual stimuli that can then be used for organizing and planning behavior. This could explain the modulation of visual responses both in associative learning and in natural behavior.
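A simple way to express the task-condition modulation described in point (2) is an index contrasting a neuron's firing rate to the same object under passive observation versus when the object is the grasp target. The Python sketch below is illustrative only; the firing rates are hypothetical and the index is a generic contrast measure, not the study's analysis.

```python
# Minimal sketch (hypothetical spikes/s): contrast one neuron's response to the
# same object when merely observed vs when it is the target of a grasping action.
import numpy as np

rate_observe = np.array([12.0, 14.0, 11.0, 13.0])   # hypothetical trial rates
rate_grasp = np.array([22.0, 25.0, 21.0, 24.0])     # hypothetical trial rates

mi = (rate_grasp.mean() - rate_observe.mean()) / (rate_grasp.mean() + rate_observe.mean())
print(f"modulation index = {mi:.2f}")  # > 0: stronger response when object is acted upon
```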



Animals
2021
Vol 11 (8)
pp. 2233
Author(s):  
Loïc Pougnault ◽  
Hugo Cousillas ◽  
Christine Heyraud ◽  
Ludwig Huber ◽  
Martine Hausberger ◽  
...  

Attention is defined as the ability to selectively process one aspect of the environment over others and is at the core of all cognitive processes such as learning, memorization, and categorization. Thus, evaluating and comparing attentional characteristics between individuals and across situations is an important aspect of cognitive studies. Recent studies have shown the value of analyzing spontaneous attention in standardized situations, but data are still scarce, especially for songbirds. The present study adapted three tests of attention (towards visual non-social, visual social, and auditory stimuli) as tools for future comparative research in the European starling (Sturnus vulgaris), a species well known to present individual variation in social learning and engagement. Our results reveal that attentional characteristics (glances versus gazes) vary according to the stimulus broadcast: more gazes towards unusual visual stimuli and species-specific auditory stimuli, and more glances towards species-specific visual stimuli and hetero-specific auditory stimuli. By revealing individual variation, this study shows that these tests constitute a very useful and easy-to-use tool for evaluating spontaneous individual attentional characteristics and their modulation by a variety of factors. Our results also indicate that attentional skill is not a uniform concept and depends upon the modality and the stimulus type.
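Because the study scores attention through glances versus gazes, a comparative analysis needs an explicit rule for telling the two apart. The sketch below shows one possible duration-based operationalization in Python; the one-second threshold and the bout data are hypothetical assumptions, not the criterion used in the study.

```python
# Minimal sketch: classify looking bouts as 'glances' (short) vs 'gazes'
# (sustained) using a hypothetical duration threshold.
GAZE_THRESHOLD_S = 1.0  # hypothetical cut-off, in seconds

def classify_looks(bout_durations_s):
    """Return (n_glances, n_gazes) for a session's looking bouts."""
    glances = sum(1 for d in bout_durations_s if d < GAZE_THRESHOLD_S)
    gazes = len(bout_durations_s) - glances
    return glances, gazes

print(classify_looks([0.3, 0.5, 2.1, 0.4, 1.8]))  # -> (3, 2)
```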



2021
Author(s):  
Judith M. Varkevisser ◽  
Ralph Simon ◽  
Ezequiel Mendoza ◽  
Martin How ◽  
Idse van Hijlkema ◽  
...  

Bird song and human speech are learned early in life, and in both cases engagement with live social tutors generally leads to better learning outcomes than passive audio-only exposure. Real-world tutor–tutee relations are normally not uni- but multimodal, and observations suggest that visual cues related to sound production might enhance vocal learning. We tested this hypothesis by pairing appropriate, colour-realistic, high frame-rate videos of a singing adult male zebra finch tutor with song playbacks and presenting these stimuli to juvenile zebra finches (Taeniopygia guttata). Juveniles exposed to song playbacks combined with video presentation of a singing bird approached the stimulus more often and spent more time close to it than juveniles exposed to audio playback only or to audio playback combined with pixelated and time-reversed videos. However, higher engagement with the realistic audio–visual stimuli was not predictive of better song learning. Thus, although multimodality increased stimulus engagement, and biologically relevant video content was more salient than colour- and movement-equivalent videos, the higher engagement with the realistic audio–visual stimuli did not lead to enhanced vocal learning. Whether the lack of three-dimensionality of a video tutor and/or the lack of meaningful social interaction makes video less suitable for facilitating song learning than audio–visual exposure to a live tutor remains to be tested.



1954
Vol 100 (419)
pp. 462-477
Author(s):  
K. R. L. Hall ◽  
E. Stride

A number of studies on reaction time (R.T.) latency to visual and auditory stimuli in psychotic patients have been reported since the first investigations of the personal equation were carried out. The general trends in the work up to 1943 are well summarized by Hunt (1944), while Granger's (1953) review of “Personality and visual perception” contains a summary of the studies on R.T. to visual stimuli.





1990
Vol 63 (3)
pp. 502-522
Author(s):  
R. Lal ◽  
M. J. Friedlander

1. Extracellular recordings were made from single neurons in layer A of the left dorsal lateral geniculate nucleus (LGNd) of anesthetized and paralyzed adult cats. Responses to retinotopically identical visual stimuli (presented through the right eye) were recorded at several positions of the left eye in its orbit. Visual stimuli consisted of drifting sinusoidal gratings of optimal temporal and spatial frequencies at twice-threshold contrast. Visual stimulation of the left eye was blocked by a variety of methods, including intravitreal injection of tetrodotoxin (TTX). The change in position of the left eye was achieved by passive movements in a randomized and interleaved fashion. Of 237 neurons studied, responses were obtained from 143 neurons on 20-100 trials of identical visual stimulation at each of six eye positions. Neurons were classified as X- or Y- on the basis of a standard battery of physiological tests (primarily linearity of spatial summation and response latency to electrical stimulation of the optic chiasm).

2. The effect of eye position on the visual response of the 143 neurons was analyzed with respect to the number of action potentials elicited and the peak firing rate. Fifty-seven (40%) neurons showed a significant effect of eye position on the visual response [one-factor repeated-measures analysis of variance (ANOVA), P < 0.05] by either criterion (number of action potentials or peak firing rate). Of these 57 neurons, 47 had a significant effect (P < 0.05) with respect to the number of action potentials, and 23 had a significant effect (P < 0.05) by both criteria. Thus the permissive measure (either criterion) and the conservative measure (both criteria) resulted in 40% and 16%, respectively, of all neurons' visual responses being significantly affected by eye position.

3. For the 47 neurons with a significant effect of eye position (number-of-action-potentials criterion), a trend analysis of eye position versus visual response showed a linear trend (P < 0.05) for 9 neurons, a quadratic trend (P < 0.05) for 32 neurons, and no significant trend for the 6 remaining neurons. The trends were approximated with linear and nonlinear gain fields (the range of eye-position change over which the visual response was modulated). The gain fields of individual neurons were compared by measuring the normalized gain (change in neuronal response per degree change of eye position). The mean normalized gain for the 47 neurons was 4.3.

4. The nonlinear gain fields were generally symmetric with respect to nasal versus temporal changes in eye position. [Abstract truncated at 400 words.]
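The core of the analysis in point 2 is a per-neuron test of whether eye position modulates the visual response, followed in point 3 by a gain estimate (response change per degree). The Python sketch below mirrors that logic on hypothetical spike counts; it substitutes a Friedman test for the repeated-measures ANOVA (SciPy has no built-in one), so it is an approximation of the described procedure, not a reproduction of it.

```python
# Minimal sketch (hypothetical data): test the effect of eye position on one
# neuron's visual response, then estimate a normalized gain (% change per degree).
import numpy as np
from scipy import stats

eye_positions_deg = np.array([-15, -9, -3, 3, 9, 15], dtype=float)
rng = np.random.default_rng(0)
# 20 trials x 6 eye positions of spike counts, with a built-in linear trend
spikes = rng.poisson(lam=20 + 0.5 * eye_positions_deg, size=(20, 6))

# Friedman test as a stand-in for the one-factor repeated-measures ANOVA
stat, p = stats.friedmanchisquare(*spikes.T)
print(f"effect of eye position: p = {p:.4f}")

# Linear-trend gain: slope of mean response vs eye position, as % of mean response
mean_resp = spikes.mean(axis=0)
slope, _ = np.polyfit(eye_positions_deg, mean_resp, 1)
print(f"normalized gain ~ {100 * abs(slope) / mean_resp.mean():.1f}% per degree")
```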



2018
Vol 7
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first explored reaction times to a visual stimulus and the second to an auditory stimulus. Analysis of the data showed that visual stimuli evoke faster reactions than auditory stimuli.
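The comparison the paper reports reduces to a two-sample test on the measured reaction times. A minimal Python sketch follows; the reaction-time samples are hypothetical (their means are merely chosen to match the reported direction of the effect, visual faster than auditory), and the t-test stands in for whatever statistics the authors used.

```python
# Minimal sketch (hypothetical reaction times, ms): compare mean RT to visual
# vs auditory stimuli with an independent-samples t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
rt_visual = rng.normal(loc=250, scale=30, size=40)     # hypothetical samples
rt_auditory = rng.normal(loc=280, scale=30, size=40)   # hypothetical samples

t, p = stats.ttest_ind(rt_visual, rt_auditory)
print(f"visual {rt_visual.mean():.0f} ms vs auditory {rt_auditory.mean():.0f} ms")
print(f"t = {t:.2f}, p = {p:.4f}")
```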



2020
Vol 53
pp. 126702
Author(s):  
Li Deng ◽  
Hao Luo ◽  
Jun Ma ◽  
Zhuo Huang ◽  
Ling-Xia Sun ◽  
...  


1995
Vol 7 (2)
pp. 182-195
Author(s):  
Martha Flanders ◽  
John F. Soechting

In reaching and grasping movements, information about object location and object orientation is used to specify the appropriate proximal arm posture and the appropriate positions for the wrist and fingers. Since object orientation is ideally defined in a frame of reference fixed in space, this study tested whether the neural control of hand orientation is also best described as being in this spatial reference frame. With the proximal arm in various postures, human subjects used a handheld rod to approximate verbally defined spatial orientations. Subjects did quite well at indicating spatial vertical and spatial horizontal but made consistent errors in estimating 45° spatial slants. The errors were related to the proximal arm posture in a way that indicated that oblique hand orientations may be specified as a compromise between a reference frame fixed in space and a reference frame fixed to the arm. In another experiment, where subjects were explicitly requested to use a reference frame fixed to the arm, the performance was consistently biased toward a spatial reference frame. The results suggest that reaching and grasping movements may be implemented as an amalgam of two frames of reference, both neurally and behaviorally.
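The "compromise between two frames of reference" conclusion can be written as a weighted average: the produced orientation mixes the target coded in space with the target coded relative to the arm. The Python sketch below illustrates this; the weighting and the angles are hypothetical, not parameters estimated by the study.

```python
# Minimal sketch: produced hand orientation as a weighted compromise between a
# space-fixed frame and an arm-fixed frame (weight and angles are hypothetical).
def produced_orientation(target_deg, arm_posture_deg, w_spatial=0.7):
    """Mix pure spatial coding with coding of the target relative to the arm."""
    arm_frame_estimate = target_deg - arm_posture_deg  # target re-expressed on the arm
    return w_spatial * target_deg + (1 - w_spatial) * arm_frame_estimate

# A 45-deg slant with the arm rotated by 20 deg comes out at 39 deg, an error
# toward the arm frame; vertical and horizontal targets coded spatially show none.
print(produced_orientation(45.0, 20.0))  # 39.0
```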


