Timing Flickers across Sensory Modalities

Perception ◽  
10.1068/p6362 ◽  
2009 ◽  
Vol 38 (8) ◽  
pp. 1144-1151 ◽  
Author(s):  
Carmelo Mario Vicario ◽  
Gaetano Rappo ◽  
Anna Maria Pepi ◽  
Massimiliano Oliveri

In tasks requiring a comparison of the duration of a reference and a test visual cue, the spatial position of the test cue is likely to be implicitly coded, producing a form of congruency effect or introducing a response bias according to the environmental scale or its vectorial reference. The precise mechanism generating these perceptual shifts in subjective duration is not understood, although several studies suggest that spatial attentional factors may play a critical role. Here we use a duration comparison task within and across sensory modalities to examine whether temporal performance is also modulated when observers are exposed to spatial distractors from a different sensory modality. Different groups of healthy participants performed duration comparison tasks in separate sessions: a time comparison task of visual stimuli during exposure to spatially presented auditory distractors, and a time comparison task of auditory stimuli during exposure to spatially presented visual distractors. We found that the duration of visual stimuli was biased depending on the spatial position of the auditory distractors. Observers underestimated the duration of stimuli presented in the left spatial field and tended to overestimate the duration of stimuli presented in the right spatial field. In contrast, the timing of auditory stimuli was unaffected by exposure to visual distractors. These results support the existence of multisensory interactions between space and time, showing that, in cross-modal paradigms, the presence of auditory distractors can modify visuo-temporal perception but not vice versa. This asymmetry is discussed in terms of sensory–perceptual differences between the two systems.
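A minimal analysis sketch of the spatial bias described above, in Python. The trial records, field names, and ratio values are hypothetical and are not taken from the study; only the shape of the comparison (perceived vs. physical duration split by distractor side) follows the abstract.

```python
# Hypothetical sketch: quantify duration bias by auditory-distractor side.
from statistics import mean

# Each trial: side of the auditory distractor and the duration ratio
# (perceived / physical) inferred from the participant's comparison response.
trials = [
    {"distractor_side": "left",  "perceived_over_physical": 0.92},
    {"distractor_side": "left",  "perceived_over_physical": 0.95},
    {"distractor_side": "right", "perceived_over_physical": 1.06},
    {"distractor_side": "right", "perceived_over_physical": 1.03},
]

def duration_bias(trials, side):
    """Mean perceived/physical ratio for trials with a distractor on one side.
    Values below 1 indicate underestimation, above 1 overestimation."""
    ratios = [t["perceived_over_physical"] for t in trials
              if t["distractor_side"] == side]
    return mean(ratios)

for side in ("left", "right"):
    print(side, round(duration_bias(trials, side), 3))
```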

2010 ◽  
Vol 24 (1) ◽  
pp. 1-6 ◽  
Author(s):  
Oscar H. Hernández ◽  
Muriel Vogel-Sprott

A missing stimulus task requires an immediate response to the omission of a regular recurrent stimulus. The task evokes a subclass of event-related potential known as the omitted stimulus potential (OSP), which reflects cognitive processes such as expectancy. The behavioral response to a missing stimulus is referred to as the omitted stimulus reaction time (RT). This total RT measure is known to include cognitive and motor components. The cognitive component (premotor RT) is measured by the time from the missing stimulus until the onset of motor action. The motor RT component is measured by the time from the onset of muscle action until the completion of the response. Previous research showed that RT is faster to auditory than to visual stimuli, and that the premotor RT to a missing auditory stimulus is correlated with the duration of the OSP. Although this observation suggests that similar cognitive processes might underlie these two measures, no research has tested this possibility. If similar cognitive processes are involved in the premotor RT and the OSP duration, these two measures should also be correlated in the visual and somatosensory modalities, and the premotor RT to missing auditory stimuli should be fastest. This hypothesis was tested in 17 young male volunteers who performed a missing stimulus task: they were presented with trains of auditory, visual, and somatosensory stimuli while the OSP and RT measures were recorded. The results showed that premotor RT and OSP duration were consistently related, and that both measures were shorter for auditory stimuli than for visual or somatosensory stimuli. This provides the first evidence that the premotor RT is related to an attribute of the OSP in all three sensory modalities.
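A sketch of the per-modality relationship described above, in Python. The participant values are invented for illustration; only the analysis shape (a correlation between premotor RT and OSP duration within each modality) follows the abstract.

```python
# Hypothetical sketch: correlate premotor RT with OSP duration per modality.
import numpy as np

data = {
    # modality: (premotor RT in ms, OSP duration in ms), one value per participant
    "auditory":      ([210, 225, 198, 240], [310, 330, 295, 355]),
    "visual":        ([260, 275, 250, 290], [360, 380, 345, 400]),
    "somatosensory": ([255, 270, 248, 285], [355, 372, 340, 395]),
}

for modality, (premotor_rt, osp_duration) in data.items():
    r = np.corrcoef(premotor_rt, osp_duration)[0, 1]  # Pearson correlation
    print(f"{modality:>14}: mean premotor RT = {np.mean(premotor_rt):.0f} ms, "
          f"r(premotor RT, OSP duration) = {r:.2f}")
```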


Animals ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 2233
Author(s):  
Loïc Pougnault ◽  
Hugo Cousillas ◽  
Christine Heyraud ◽  
Ludwig Huber ◽  
Martine Hausberger ◽  
...  

Attention is defined as the ability to process selectively one aspect of the environment over others and is at the core of cognitive processes such as learning, memorization, and categorization. Thus, evaluating and comparing attentional characteristics between individuals and across situations is an important aspect of cognitive studies. Recent studies have shown the value of analyzing spontaneous attention in standardized situations, but data are still scarce, especially for songbirds. The present study adapted three tests of attention (towards visual non-social, visual social, and auditory stimuli) as tools for future comparative research in the European starling (Sturnus vulgaris), a species well known to show individual variation in social learning and engagement. Our results reveal that attentional characteristics (glances versus gazes) vary according to the stimulus broadcast: more gazes towards unusual visual stimuli and species-specific auditory stimuli, and more glances towards species-specific visual stimuli and hetero-specific auditory stimuli. By revealing individual variation, this study shows that these tests constitute a very useful and easy-to-use tool for evaluating spontaneous individual attentional characteristics and their modulation by a variety of factors. Our results also indicate that attentional skills are not a uniform concept and depend upon the modality and the stimulus type.


1954 ◽  
Vol 100 (419) ◽  
pp. 462-477 ◽  
Author(s):  
K. R. L. Hall ◽  
E. Stride

A number of studies on reaction time (R.T.) latency to visual and auditory stimuli in psychotic patients have been reported since the first investigations on the personal equation were carried out. The general trends of the work up to 1943 are well summarized by Hunt (1944), while Granger's (1953) review of "Personality and visual perception" contains a summary of the studies on R.T. to visual stimuli.


2021 ◽  
Vol 11 (9) ◽  
pp. 1206
Author(s):  
Erika Almadori ◽  
Serena Mastroberardino ◽  
Fabiano Botta ◽  
Riccardo Brunetti ◽  
Juan Lupiáñez ◽  
...  

Object sounds can enhance the attentional selection and perceptual processing of semantically related visual stimuli. However, it is currently unknown whether crossmodal semantic congruence also affects post-perceptual stages of information processing, such as short-term memory (STM), and whether this effect is modulated by the object's consistency with the background visual scene. In two experiments, participants viewed everyday visual scenes for 500 ms while listening to an object sound, which could either be semantically related to the object that served as the STM target at retrieval or not. This defined crossmodal semantically cued vs. uncued targets. The target was either in- or out-of-context with respect to the background visual scene. After a maintenance period of 2000 ms, the target was presented in isolation against a neutral background, in either the same or a different spatial position as in the original scene. Participants judged whether the object's position was the same or different and then rated their confidence in that response. The results revealed greater accuracy when judging the spatial position of targets paired with a semantically congruent object sound at encoding. This crossmodal facilitatory effect was modulated by whether the target object was in- or out-of-context with respect to the background scene, with out-of-context targets reducing the facilitatory effect of object sounds. Overall, these findings suggest that the presence of the object sound at encoding facilitated the selection and processing of the semantically related visual stimuli, but that this effect depends on the semantic configuration of the visual scene.
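A minimal sketch of the 2 x 2 accuracy comparison implied by the design (sound cue: congruent vs. incongruent; target: in- vs. out-of-context). The trial records and accuracy values below are hypothetical and only illustrate the analysis shape.

```python
# Hypothetical sketch: cell-wise accuracy for sound congruency x scene context.
from statistics import mean
from itertools import product

trials = [
    {"sound": "congruent",   "context": "in-context",     "correct": 1},
    {"sound": "congruent",   "context": "in-context",     "correct": 1},
    {"sound": "congruent",   "context": "out-of-context", "correct": 1},
    {"sound": "congruent",   "context": "out-of-context", "correct": 0},
    {"sound": "incongruent", "context": "in-context",     "correct": 1},
    {"sound": "incongruent", "context": "in-context",     "correct": 0},
    {"sound": "incongruent", "context": "out-of-context", "correct": 0},
    {"sound": "incongruent", "context": "out-of-context", "correct": 0},
]

for sound, context in product(("congruent", "incongruent"),
                              ("in-context", "out-of-context")):
    cell = [t["correct"] for t in trials
            if t["sound"] == sound and t["context"] == context]
    print(f"sound={sound:<11} target {context:<14}: accuracy = {mean(cell):.2f}")
```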


2021 ◽  
Author(s):  
Chenyang Lin ◽  
Maggie Yeh ◽  
Ladan Shams

Human perception is inherently multisensory, with cross-modal integration playing a critical role in generating a coherent perceptual experience. To understand the causes of pleasurable experiences, we must understand whether and how the relationship between separate sensory modalities influences our experience of pleasure. We investigated the effect of congruency between vision and audition in the form of temporal alignment between the cuts in a video and the beats in an accompanying soundtrack. Despite the subliminal nature of the manipulation, perceptual pleasure was higher for temporally congruent than for incongruent pairings. These results suggest that the temporal aspect of the interaction between the visual and auditory modalities plays a critical role in shaping our perceptual pleasure, even when that interaction is not accessible to conscious awareness.
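A sketch of how cut-to-beat congruency could be operationalised: for each video cut, find the nearest soundtrack beat and measure the offset. The timestamps below are invented; only the alignment measure is illustrated, not the stimuli used in the study.

```python
# Hypothetical sketch: mean absolute offset from each video cut to its nearest beat.
def mean_cut_to_beat_offset(cut_times, beat_times):
    """Mean absolute offset (seconds) from each cut to its nearest beat."""
    offsets = [min(abs(c - b) for b in beat_times) for c in cut_times]
    return sum(offsets) / len(offsets)

beats = [0.5 * k for k in range(20)]       # beats every 500 ms
congruent_cuts = [2.0, 4.0, 6.5, 9.0]      # cuts placed on beats
incongruent_cuts = [2.2, 4.3, 6.7, 9.2]    # cuts offset from beats

print("congruent  :", mean_cut_to_beat_offset(congruent_cuts, beats))
print("incongruent:", mean_cut_to_beat_offset(incongruent_cuts, beats))
```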


2018 ◽  
Vol 7 ◽  
pp. 172-177
Author(s):  
Łukasz Tyburcy ◽  
Małgorzata Plechawska-Wójcik

The paper describes the results of a comparison of reaction times to visual and auditory stimuli using EEG evoked potentials. Two experiments were conducted: the first examined reaction times to visual stimuli and the second to auditory stimuli. Analysis of the data showed that visual stimuli evoked faster reactions than auditory stimuli.
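A sketch of the reaction-time comparison described above, in Python. The RT samples are invented (chosen only so that visual responses come out faster, as reported); only the shape of the analysis, a comparison of two independent samples, is shown.

```python
# Hypothetical sketch: compare mean RTs for visual vs. auditory stimuli.
from scipy import stats

visual_rt_ms = [245, 252, 238, 260, 249, 243]     # hypothetical per-trial RTs
auditory_rt_ms = [268, 275, 261, 283, 270, 266]

t, p = stats.ttest_ind(visual_rt_ms, auditory_rt_ms)
print(f"mean visual RT   = {sum(visual_rt_ms)/len(visual_rt_ms):.1f} ms")
print(f"mean auditory RT = {sum(auditory_rt_ms)/len(auditory_rt_ms):.1f} ms")
print(f"t = {t:.2f}, p = {p:.4f}")
```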


2021 ◽  
Author(s):  
Akhil Dodda ◽  
Darsith Jayachandran ◽  
Shiva Subbulakshmi Radhakrishnan ◽  
Saptarshi Das

Natural intelligence has many dimensions, and in animals, learning about the environment and making behavioral changes are some of its manifestations. In primates, vision plays a critical role in learning. The underlying biological neural networks contain specialized neurons and synapses which not only sense and process visual stimuli but also learn and adapt, with remarkable energy efficiency. Forgetting also plays an active role in learning. Mimicking the adaptive neurobiological mechanisms for seeing, learning, and forgetting can therefore accelerate the development of artificial intelligence (AI) and bridge the massive energy gap that exists between AI and biological intelligence. Here we demonstrate bio-inspired machine vision based on a large-area-grown monolayer 2D phototransistor array integrated with an analog, non-volatile, and programmable memory gate-stack that not only enables direct learning and unsupervised relearning from visual stimuli but also offers learning adaptability under photopic (bright-light), scotopic (low-light), and noisy illumination conditions at minuscule energy expenditure. In short, our "all-in-one" hardware vision platform combines "sensing", "computing", and "storage" not only to overcome the von Neumann bottleneck of conventional complementary metal oxide semiconductor (CMOS) technology but also to eliminate the need for peripheral circuits and sensors.
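A highly simplified, purely conceptual sketch of the "sense-compute-store" idea: a single pixel whose non-volatile weight is strengthened by the light it senses (learning), decays when unused (forgetting), and adapts as the illumination regime changes (relearning). Nothing here models the device physics or the fabricated array from the paper; all parameters are assumptions for illustration only.

```python
# Hypothetical sketch: one adaptive pixel with a bounded, non-volatile weight.
def update_weight(weight, photocurrent, learn_rate=0.1, decay=0.02):
    """One step: potentiate with the sensed signal, then slowly decay."""
    weight += learn_rate * photocurrent   # "write" to the programmable memory
    weight *= (1.0 - decay)               # gradual forgetting of stale weights
    return max(0.0, min(1.0, weight))     # weights bounded, as in a real device

w = 0.2
for photocurrent in [0.9, 0.9, 0.9, 0.1, 0.1, 0.0, 0.0]:  # bright -> dim -> dark
    w = update_weight(w, photocurrent)
    print(f"input={photocurrent:.1f} -> weight={w:.3f}")
```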

