visual stimulus
Recently Published Documents


TOTAL DOCUMENTS: 854 (five years: 173)

H-INDEX: 56 (five years: 5)

2022 ◽  
Author(s):  
Constantinos Eleftheriou

The goal of this protocol is to assess visuomotor learning and motor flexibility in freely moving mice using the Visiomode touchscreen platform. Water-restricted mice first learn to associate touching a visual stimulus on the screen with a water reward. They then learn to discriminate between different visual stimuli on the touchscreen by nose-poking, before being asked to switch their motor strategy to forelimb reaching.


2021 ◽  
Author(s):  
Rachel Ege ◽  
A. John van Opstal ◽  
Marc Mathijs van Wanrooij

The ventriloquism aftereffect (VAE) describes the persistent shift of perceived sound location after adaptation to a ventriloquism condition, in which a sound is repeatedly paired with a displaced visual stimulus. During adaptation, participants consistently mislocalize the sound in the direction of the visual stimulus (the ventriloquism effect, VE). Previous studies provide conflicting reports regarding the strength of the VAE, ranging from 0 to nearly 100% of the VE. Moreover, there is controversy about its generalization to sounds other than the one inducing the VE, ranging from no transfer at all to full transfer across different sound spectra. Here, we imposed the VE for three different sounds: a low-frequency and a high-frequency narrow-band noise, and a broadband Gaussian white noise (GWN). In the adaptation phase, listeners made fast goal-directed head movements to localize the sound, presented across a 70-deg range in the horizontal plane, while ignoring a visual distractor that was consistently displaced 10 deg to the right of the sound. In the post-adaptation phase, participants localized narrow-band sounds with center frequencies from 0.5 to 8 kHz, as well as GWN, without the visual distractor. Our results show that the VAE amounted to approximately 40% of the VE and generalized well across the entire frequency domain. We also found that the strength of the VAE correlated with pre-adaptation sound-localization performance. We compare our results with previous reports and discuss different hypotheses regarding optimal audio-visual cue integration.
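The "VAE as a percentage of the VE" measure above can be made concrete with a small sketch. Assuming localization shifts are summarized as mean signed errors (the function names and the numeric values below are hypothetical illustrations, not data from the study), the aftereffect is simply the post-adaptation shift expressed as a fraction of the adaptation-phase shift:

```python
import numpy as np

def localization_bias(targets, responses):
    """Mean signed error (deg) of localization responses relative to targets."""
    return float(np.mean(np.array(responses) - np.array(targets)))

def aftereffect_percent(ve_shift, vae_shift):
    """Express the post-adaptation shift (VAE) as a percentage of the
    ventriloquism effect (VE) induced during adaptation."""
    return 100.0 * vae_shift / ve_shift

# Hypothetical numbers: a 10-deg visual displacement inducing an ~8-deg VE
# during adaptation and an ~3.2-deg residual shift afterwards.
ve = localization_bias([0, 10, -20], [8, 18, -12])          # 8.0 deg
vae = localization_bias([0, 10, -20], [3.2, 13.2, -16.8])   # ~3.2 deg
print(aftereffect_percent(ve, vae))                         # ~40% of the VE
```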


PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0261063
Author(s):  
Sachiyo Ueda ◽  
Kazuya Nagamachi ◽  
Junya Nakamura ◽  
Maki Sugimoto ◽  
Masahiko Inami ◽  
...  

Visual perspective taking is inferring how the world looks to another person. To clarify this process, we investigated whether employing a humanoid avatar as the viewpoint would facilitate an imagined perspective shift in a virtual environment, and which features of the avatar drive the facilitation effect. We used a task that involved reporting, via a simple direction judgment, how an object looks either from the avatar’s position or from the position of an empty chair. We found that the humanoid avatar’s presence improved task performance. Furthermore, the avatar’s facilitation effect was observed only when the avatar was facing the visual stimulus to be judged; performance was worse when it faced backwards than when there was only an empty chair facing forwards. This suggests that the avatar does not simply attract spatial attention; rather, its posture is crucial for the facilitation effect. In addition, when the directions of the head and the torso were opposite (i.e., an impossible posture), the avatar’s facilitation effect disappeared. Thus, visual perspective taking might not be facilitated by an avatar whose posture is biomechanically impossible, because we cannot embody it. Finally, even when the head of an avatar in a possible posture was covered with a bucket, the facilitation effect was found for the forward-facing avatar rather than the backward-facing avatar. That is, a head/gaze direction cue, or presumably the belief that the avatar can see the visual stimulus to be judged, was not required. These results suggest that explicit perspective taking is facilitated by embodiment towards humanoid avatars.


Author(s):  
Ian Christopher Calloway

Prior studies suggest that listeners are more likely to categorize a sibilant ranging acoustically from [ʃ] to [s] as /s/ if provided auditory or visual information about the speaker that suggests male gender. Social cognition can also be affected by experimentally induced differences in power. A powerful individual’s impression of another tends to show greater consistency with the other person’s broad social category, while a powerless individual’s impression is more consistent with the specific pieces of information provided about the other person. This study investigated whether sibilant categorization would be influenced by power when the listener is presented with inconsistent sources of information about speaker gender. Participants were experimentally primed for behavior consistent with powerful or powerless individuals. They then completed a forced-choice identification task: they saw a visual stimulus (a male or female face) and categorized an auditory stimulus (ranging from ‘shy’ to ‘sigh’) as /ʃ/ or /s/. As expected, participants primed for high power were sensitive to a single cue to gender, while those who received the low power prime were sensitive to both, even if the cues did not match. This result suggests that variability in listener power may cause systematic differences in phonetic perception.


2021 ◽  
Author(s):  
Carlyn Patterson Gentile ◽  
Geoffrey K Aguirre ◽  
Kristy B. Arbogast ◽  
Christina L. Master

Increased sensitivity to light is common following concussion. Viewing a flickering light can also produce uncomfortable somatic sensations like nausea or headache. Here we examined effects evoked by viewing a patterned, flickering screen in a cohort of 81 uninjured youth athletes and 84 youth with concussion. Using exploratory factor analysis, we identified two primary dimensions of variation: the presence or absence of visually evoked effects, and the tendency to manifest effects that localized to the eyes (e.g., eye watering) versus more generalized neurologic symptoms (e.g., headache). Based on these two dimensions, we grouped participants into three categories of evoked symptomatology: no effects, eye-predominant effects, and brain-predominant effects. A similar proportion of participants reported eye-predominant effects in the uninjured (33.3%) and concussion (32.1%) groups. By contrast, participants who experienced brain-predominant effects were almost entirely from the concussion group (1.2% of uninjured, 35.7% of concussed). The presence of brain-predominant effects was associated with a higher concussion symptom burden and reduced performance on visio-vestibular tasks. Our findings indicate that the experience of negative constitutional, somatic sensations in response to a dynamic visual stimulus is a salient marker of concussion and is indicative of more severe concussion symptomatology. We speculate that differences in visually evoked effects reflect varying levels of activation of the trigeminal nociceptive system.
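As an illustration of the grouping step described above, a minimal rule on the two factor-score dimensions could look like the following. The function name, score convention, and thresholds are hypothetical assumptions for illustration; the paper’s actual factor analysis and cutoffs are not reproduced here:

```python
def classify_evoked_effects(any_effect_score, localization_score, thresh=0.0):
    """Assign a participant to one of three symptom categories from two
    illustrative factor scores: dimension 1 = presence of any visually
    evoked effect; dimension 2 = eye-localized (negative) vs. generalized
    neurologic (positive) effects. Thresholds are assumptions, not the
    authors' values."""
    if any_effect_score <= thresh:
        return "no effects"
    if localization_score > thresh:
        return "brain-predominant"
    return "eye-predominant"
```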


Author(s):  
Aleena R. Garner ◽  
Georg B. Keller

Learned associations between stimuli in different sensory modalities can shape the way we perceive these stimuli. However, it is not well understood how these interactions are mediated or at what level of the processing hierarchy they occur. Here we describe a neural mechanism by which an auditory input can shape visual representations of behaviorally relevant stimuli through direct interactions between auditory and visual cortices in mice. We show that the association of an auditory stimulus with a visual stimulus in a behaviorally relevant context leads to experience-dependent suppression of visual responses in primary visual cortex (V1). Auditory cortex axons carry a mixture of auditory and retinotopically matched visual input to V1, and optogenetic stimulation of these axons selectively suppresses V1 neurons that are responsive to the associated visual stimulus after, but not before, learning. Our results suggest that cross-modal associations can be communicated by long-range cortical connections and that, with learning, these cross-modal connections function to suppress responses to predictable input.


2021 ◽  
Author(s):  
Prasakti Tenri Fanyiwi ◽  
Beshoy Agayby ◽  
Ricardo Kienitz ◽  
Marcus Haag ◽  
Michael C. Schmid

A growing body of psychophysical research reports theta (3-8 Hz) rhythmic fluctuations in visual perception that are often attributed to an attentional sampling mechanism arising from theta-rhythmic neural activity in mid- to high-level cortical association areas. However, it remains unclear to what extent such neuronal theta oscillations might already emerge in early sensory cortex, such as the primary visual cortex (V1), e.g., from the stimulus filter properties of neurons. To address this question, we recorded multi-unit neural activity from V1 of two macaque monkeys viewing a static visual stimulus of variable size, orientation and contrast. We found that more than 50% of visually responsive electrode sites showed a spectral peak at theta frequencies. Theta power varied with these basic stimulus properties: within each property domain (e.g., size), there was usually a single stimulus value that induced the strongest theta activity. In addition to these variations in theta power, the peak frequency of the theta oscillation increased with increasing stimulus size and also changed depending on the stimulus position in the visual field. Further analysis confirmed that this neural theta rhythm was indeed stimulus-induced and did not arise from small fixational eye movements (microsaccades). When the monkeys performed a detection task on a target embedded in a theta-generating visual stimulus, reaction times also tended to fluctuate at the same theta frequency as the one observed in the neural activity. The present study shows that a highly stimulus-dependent neuronal theta oscillation can be elicited in V1 and appears to influence the temporal dynamics of visual perception.
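A spectral peak in the 3-8 Hz band, as reported for the multi-unit activity above, can be located with a plain FFT-based power estimate. This is an illustrative sketch, not the authors’ analysis pipeline; `theta_peak` and the synthetic 5 Hz firing-rate trace are assumptions:

```python
import numpy as np

def theta_peak(signal, fs, band=(3.0, 8.0)):
    """Frequency (Hz) and power of the largest spectral component in the
    theta band, from a Hann-windowed FFT of a mean-subtracted signal."""
    signal = np.asarray(signal, dtype=float)
    win = np.hanning(signal.size)
    power = np.abs(np.fft.rfft((signal - signal.mean()) * win)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    i = np.argmax(power[in_band])
    return float(freqs[in_band][i]), float(power[in_band][i])

# Synthetic multi-unit firing rate: a 5 Hz theta component plus noise,
# sampled at 100 Hz for 10 s (values chosen purely for illustration).
rng = np.random.default_rng(0)
fs = 100.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rate = 1.0 + 0.5 * np.sin(2.0 * np.pi * 5.0 * t) + 0.1 * rng.standard_normal(t.size)
f_peak, _ = theta_peak(rate, fs)
print(f_peak)  # ~5.0 Hz
```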


2021 ◽  
Vol 31 (1) ◽  
Author(s):  
Eduard Isenmann ◽  
Moritz Schumann ◽  
Hannah L. Notbohm ◽  
Ulrich Flenker ◽  
Philipp Zimmer

Background: Hormones like testosterone play a crucial role in performance enhancement and muscle growth. Various attempts to increase testosterone release and testosterone concentration have therefore been made, especially in the context of resistance training. Among practitioners, sexual activity (coitus and masturbation) a few hours before training is often claimed to increase testosterone concentration and thus promote muscle growth. However, there is no evidence to support this assumption, and the kinetics of the testosterone and cortisol response after sexual activity have not been adequately investigated. The aim of this pilot study was therefore to examine the kinetics of total testosterone, free testosterone and cortisol concentrations, and their ratios, after masturbation. In a three-arm, single-blinded crossover study, the effects of masturbation with a visual stimulus were compared to a visual stimulus without masturbation and to the natural hormone kinetics in healthy young men. Results: There was a significant between-condition difference in free testosterone concentrations. Masturbation (p < 0.01) and a visual stimulus alone (p < 0.05) appear to counteract the circadian drop of free testosterone concentrations over the day. However, no statistical change was observed in the ratios between total testosterone, free testosterone and cortisol. Conclusions: Masturbation may have a potential effect on free testosterone concentrations but not on hormonal ratios. However, additional studies with larger sample sizes are needed to validate these findings.


2021 ◽  
Author(s):  
Kenta Uchida ◽  
Albert A. Burkle ◽  
Daniel T. Blumstein

Ecotourism promotes conservation efforts while also allowing low-impact observation of wildlife. Many ecotourists photograph wildlife, and photography plays an important role in focusing the public’s attention on nature. Although photography is commonly believed to be a low-impact activity, how the visual stimulus of a camera influences wildlife remains unknown. Since animals are known to fear eyes pointed towards them, we predicted that a camera with a large zoom lens would increase animals’ vigilance levels. Using yellow-bellied marmots (Marmota flaviventer) as a mammalian model, and adopting a behavioural approach to identify how marmots respond to cameras, we experimentally quantified vigilance and flight initiation distance when marmots were approached by humans with and without a camera. While a camera was pointed at an individual, marmots allocated less time to searching for predators and more time to looking at the observer than they did without a camera. However, whether a camera was pointed at a marmot or not had no effect on the distance at which the marmot flushed. Our results indicate that cameras distracted marmots but did not influence subsequent risk assessment (i.e., flight initiation distance); marmots may be curious about cameras but were not threatened by them. Capturing animals’ attention reduces searching for predators and may increase vulnerability to predation. Therefore, regulating photography in locations where predation risk is high or where vulnerable species’ ranges overlap with humans may be required to reduce photography’s impact on wildlife.


2021 ◽  
Author(s):  
Rozan Vroman ◽  
Lawrie S McKay

Recent advances in 2-photon calcium imaging in awake mice have made it possible to study the effect of different behavioural states on cortical circuitry. Many studies assume that somatic activity can be used as a measure of neuronal output. We set out to test the validity of this assumption by comparing somatic activity with the pre-synaptic activity of VIP (vasoactive intestinal peptide)- and SST (somatostatin)-positive interneurons in layer 2/3 of the primary visual cortex (V1). We used mice expressing genetically encoded calcium indicators in VIP/SST interneurons either across the whole cell (VIP/SST:GCaMP6f) or confined to pre-synapses (VIP/SST:SyGCaMP5). Mice were exposed to a full-field visual stimulation protocol consisting of 60-second presentations of moving Gabor gratings (0.04 cpd, 2 Hz) alternated with 30 seconds of grey screen. During imaging, mice were placed on an air-suspended Styrofoam ball, allowing them to run voluntarily. We compared neural activity during three 4-second time windows: before visual stimulation (−4 to 0 sec), during the initial onset (1 to 5 sec) and at the end of the stimulation (56 to 60 sec). These were further compared while the mice were stationary and while they were voluntarily locomoting. Unlike VIP somas, VIP pre-synapses showed strong suppressive responses to the visual stimulus. Furthermore, VIP somas were positively correlated with locomotion, whereas VIP pre-synapses split between positive and negative correlations. A similar but weaker distinction was found between SST somas and pre-synapses. The excitatory effect of locomotion on VIP somas increased over the course of the visual stimulus, but this property was shared only with the positively correlated VIP pre-synapses; the negatively correlated pre-synapses showed no relation to the overall activity of the soma.
Our results suggest that when making statements about the involvement of interneurons in V1 layer 2/3 circuitry, it is crucial to measure from synaptic terminals as well as from somas.
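The three analysis epochs described above can be expressed as a small helper. The window names and the `window_means` function are illustrative assumptions, not the paper’s code; only the epoch boundaries come from the protocol:

```python
import numpy as np

# The three 4-s epochs from the protocol, in seconds relative to stimulus onset.
WINDOWS = {"pre": (-4.0, 0.0), "onset": (1.0, 5.0), "late": (56.0, 60.0)}

def window_means(trace, times, windows=WINDOWS):
    """Mean of a calcium trace (e.g. dF/F) within each analysis epoch.

    `times` holds the sample timestamps aligned to stimulus onset."""
    trace = np.asarray(trace, dtype=float)
    times = np.asarray(times, dtype=float)
    return {
        name: float(trace[(times >= lo) & (times < hi)].mean())
        for name, (lo, hi) in windows.items()
    }
```

Comparing the "pre" and "late" means per cell, separately for stationary and locomoting trials, yields the kind of contrast the study describes between somatic and pre-synaptic responses.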

