visual targets
Recently Published Documents


TOTAL DOCUMENTS: 304 (five years: 39)

H-INDEX: 47 (five years: 1)

Perception ◽  
2021 ◽  
pp. 030100662110656
Author(s):  
John J.-J. Kim ◽  
Meaghan E. McManus ◽  
Laurence R. Harris

Here, we investigate how body orientation relative to gravity affects the perceived size of visual targets. In virtual reality, participants judged the size of a visual target projected at simulated distances of between 2 and 10 m and compared it to a physical reference length held in their hands while they were standing, lying prone, or lying supine. To perceive the target as matching the physical reference length, participants needed to make its visual size 5.4% larger when supine and 10.1% larger when prone than when upright. Needing to make the target larger when lying down than when standing suggests several possibilities, which are not mutually exclusive. Participants may have perceived the targets as smaller while tilted than when upright. They may have perceived the targets as closer while tilted than when upright. They may also have perceived the physical reference length as longer while tilted. Misperceiving objects as larger and/or closer when lying down may provide a survival benefit while in such a vulnerable position.
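To make the size-distance reasoning concrete: under size-distance invariance, perceived linear size is tied to visual angle and perceived distance, so a target perceived as closer must be rendered larger to match a fixed reference. A minimal Python sketch of that relation; the 10% distance compression and the function itself are illustrative assumptions, not values from the paper:

```python
import math

def required_target_size(reference_m: float,
                         simulated_dist_m: float,
                         perceived_dist_scale: float) -> float:
    """Rendered size (m) needed at the simulated distance so the target's
    visual angle matches the reference at the *perceived* distance,
    under size-distance invariance: S = 2 * d * tan(theta / 2)."""
    perceived_dist = simulated_dist_m * perceived_dist_scale
    # Visual angle the reference would subtend at the perceived distance.
    theta = 2 * math.atan(reference_m / (2 * perceived_dist))
    return 2 * simulated_dist_m * math.tan(theta / 2)

# Illustrative only: if lying down compressed perceived distance by ~10%,
# a 1 m reference at 5 m would need to be drawn ~11% larger to match.
print(required_target_size(1.0, 5.0, perceived_dist_scale=0.9))  # ~1.11
```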


2021 ◽  
Vol 17 (12) ◽  
pp. e1009662
Author(s):  
Michael R. Traner ◽  
Ethan S. Bromberg-Martin ◽  
Ilya E. Monosov

Classic foraging theory predicts that humans and animals aim to gain maximum reward per unit time. However, in standard instrumental conditioning tasks, individuals adopt an apparently suboptimal strategy: they respond slowly when the expected value is low. This reward-related bias is often explained as reduced motivation in response to low rewards. Here we present evidence that this behavior is associated with a complementary increased motivation to search the environment for alternatives. We trained monkeys to search for reward-related visual targets in environments with different values. We found that the reward-related bias scaled with environment value, was consistent with persistent searching after the target had already been found, and was associated with increased exploratory gaze to objects in the environment. A novel computational model of foraging suggests that this search strategy could be adaptive in naturalistic settings where both environments and the objects within them provide partial information about hidden, uncertain rewards.
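The rate-maximization principle from classic foraging theory can be made concrete with Charnov's marginal value theorem: leave a patch when its instantaneous gain rate falls to the environment's average reward rate. A minimal sketch, not the authors' model; the gain function and travel times are assumptions for illustration:

```python
import numpy as np

def optimal_leave_time(gain, travel_time, t_max=60.0, n=10_000):
    """Patch residence time that maximizes overall reward per unit time."""
    t = np.linspace(1e-3, t_max, n)
    rate = gain(t) / (travel_time + t)  # reward rate including travel cost
    return t[np.argmax(rate)]

# Diminishing-returns patch: cumulative gain saturates with time in patch.
gain = lambda t: 10.0 * (1.0 - np.exp(-t / 8.0))

for travel in (2.0, 10.0):  # shorter travel = richer environment
    print(f"travel {travel:4.1f} s -> leave at {optimal_leave_time(gain, travel):.2f} s")
# Poorer environments (longer travel) favor staying longer in each patch,
# mirroring how response vigor should scale with environment value.
```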


2021 ◽  
Vol 49 (12) ◽  
pp. 1-11
Author(s):  
Cheng Kang ◽  
Nan Ye ◽  
Fangwen Zhang ◽  
Yanwen Wu ◽  
Guichun Jin ◽  
...  

Although studies have investigated the influence of the emotionality of primes on the cross-modal affective priming effect, it is unclear whether this effect is driven by the arousal or by the valence of the primes. We explored how the valence and arousal of primes influence the cross-modal affective priming effect. In Experiment 1 we manipulated the valence of primes (positive and negative) that were matched for arousal. In Experiments 2 and 3 we manipulated the arousal of primes under conditions of positive and negative valence, respectively. Affective words were used as auditory primes and affective faces as visual targets in a priming task. The results suggest that the valence of primes modulated the cross-modal affective priming effect, whereas their arousal did not: the effect occurred only when the primes were positive, and negative primes produced no priming effect. In addition, for positive but not negative primes, the arousal of primes facilitated the processing of subsequent targets. Our findings are significant for understanding how affective information interacts across modalities.
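The dependent measure in such designs is typically the priming effect, the incongruent-minus-congruent reaction-time difference, computed separately per prime valence (and arousal level). A minimal sketch of that computation; all RT values below are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def priming_effect(rt_congruent, rt_incongruent):
    """Mean RT cost (ms) of prime-target affective incongruence."""
    return float(np.mean(rt_incongruent) - np.mean(rt_congruent))

# Hypothetical per-condition RTs (ms): positive primes show a priming
# effect, negative primes do not, echoing the pattern described above.
rts = {
    ("positive", "congruent"):   rng.normal(560, 40, 30),
    ("positive", "incongruent"): rng.normal(595, 40, 30),
    ("negative", "congruent"):   rng.normal(580, 40, 30),
    ("negative", "incongruent"): rng.normal(582, 40, 30),
}

for valence in ("positive", "negative"):
    effect = priming_effect(rts[(valence, "congruent")],
                            rts[(valence, "incongruent")])
    print(f"{valence}: priming effect = {effect:.1f} ms")
```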


2021 ◽  
Vol 21 (10) ◽  
pp. 7
Author(s):  
Atanas D. Stankov ◽  
Jonathan Touryan ◽  
Stephen Gordon ◽  
Anthony J. Ries ◽  
Jason Ki ◽  
...  

2021 ◽  
Vol 5 (7) ◽  
pp. 31
Author(s):  
Jussi Rantala ◽  
Jari Kangas ◽  
Olli Koskinen ◽  
Tomi Nukarinen ◽  
Roope Raisamo

Many virtual reality (VR) applications use teleport for locomotion. The non-continuous locomotion of teleport suits VR controllers and can minimize simulator sickness, but it can also reduce spatial awareness compared to continuous locomotion. Our aim was to create continuous, controller-based locomotion techniques that support spatial awareness. We compared the new techniques, slider and grab, with teleport in a task where participants counted small visual targets in a VR environment; task performance was assessed by asking participants to report how many targets they found. The results showed that slider and grab were significantly faster to use than teleport and did not cause significantly more simulator sickness. Moreover, the continuous techniques provided better spatial awareness than teleport.
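The per-frame logic of continuous, controller-based locomotion is simple to sketch. The following Python code shows one plausible mapping for a slider-like and a grab-like technique; the control mappings are assumptions, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float
    def __add__(self, o): return Vec3(self.x + o.x, self.y + o.y, self.z + o.z)
    def __sub__(self, o): return Vec3(self.x - o.x, self.y - o.y, self.z - o.z)
    def scale(self, s):   return Vec3(self.x * s, self.y * s, self.z * s)

def slider_update(rig_pos, controller_forward, thumbstick_y, speed, dt):
    """Slide the viewpoint continuously along the controller's pointing
    direction, scaled by thumbstick deflection (a slider-like technique)."""
    return rig_pos + controller_forward.scale(thumbstick_y * speed * dt)

def grab_update(rig_pos, hand_pos, prev_hand_pos):
    """Pull the world past you: move the rig opposite to the tracked
    hand's displacement while the grab button is held."""
    return rig_pos - (hand_pos - prev_hand_pos)

# Example frame at 72 Hz, thumbstick fully forward, 2 m/s speed.
print(slider_update(Vec3(0, 0, 0), Vec3(0, 0, 1), 1.0, 2.0, 1 / 72))
```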


PLoS ONE ◽  
2021 ◽  
Vol 16 (6) ◽  
pp. e0252943
Author(s):  
Matthieu Ischer ◽  
Géraldine Coppin ◽  
Axel De Marles ◽  
Myriam Essellier ◽  
Christelle Porcherot ◽  
...  

The extent to which a nasal whiff of scent can exogenously orient visual spatial attention remains poorly understood in humans. In a series of seven studies, we investigated whether visual spatial attention can be exogenously captured by a purely trigeminal stimulus (CO2) and by a stimulus that is both olfactory and trigeminal (eucalyptol). We chose these stimuli because they activate the trigeminal system, which can be considered an alert system; they are therefore presumably relevant to the individual and prone to capture attention. We used them as lateralized cues in a variant of a visual spatial cueing paradigm. In valid trials, trigeminal cues and visual targets were presented on the same side, whereas in invalid trials they were presented on opposite sides. To characterize the dynamics of this cross-modal attentional capture, we manipulated the interval between the onset of the trigeminal cue and the visual target (from 580 to 1870 ms). Reaction times in valid trigeminal trials were shorter than in all other trials, but only when this interval was around 680 or 1170 ms for CO2 and around 610 ms for eucalyptol. This result shows that both purely trigeminal and olfactory-trigeminal stimuli can exogenously capture human visual spatial attention. We discuss the importance of considering the dynamics of this cross-modal attentional capture.
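The standard analysis for such a paradigm is the validity effect: invalid-minus-valid reaction time, computed at each cue-target onset asynchrony (SOA). A minimal sketch; the SOAs follow the abstract, but the RT distributions and the SOAs at which facilitation appears are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
soas_ms = [580, 680, 1170, 1870]  # cue-target onset asynchronies (abstract's range)

def validity_effect(valid_rts, invalid_rts):
    """Positive values (ms) = faster responses when cue and target coincide."""
    return float(np.mean(invalid_rts) - np.mean(valid_rts))

for soa in soas_ms:
    # Assumed pattern: facilitation only at intermediate SOAs (cf. the CO2 results).
    facilitation = 15.0 if soa in (680, 1170) else 0.0
    valid = rng.normal(400 - facilitation, 30, 40)
    invalid = rng.normal(400, 30, 40)
    print(f"SOA {soa} ms: validity effect = {validity_effect(valid, invalid):.1f} ms")
```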


Author(s):  
Giulia C. Salgari ◽  
Geoffrey F. Potts ◽  
Joseph Schmidt ◽  
Chi C. Chan ◽  
Christopher C. Spencer ◽  
...  

Author(s):  
Chris L. E. Paffen ◽  
Andre Sahakian ◽  
Marijn E. Struiksma ◽  
Stefan Van der Stigchel

One of the most influential ideas within the domain of cognition is that of embodied cognition, in which the experienced world is the result of an interplay between an organism's physiology, sensorimotor system, and its environment. An aspect of this idea is that linguistic information activates sensory representations automatically. For example, hearing the word 'red' would automatically activate sensory representations of this color. But does linguistic information prioritize access to awareness of congruent visual information? Here, we show that linguistic verbal cues accelerate matching visual targets into awareness, using a breaking continuous flash suppression paradigm. In a speeded reaction time task, observers heard spoken color labels (e.g., red) followed by colored targets that were either congruent (red), incongruent (green), or neutral (preceded by a noncolor word) with respect to the labels. Importantly, and in contrast to previous studies investigating a similar question, the incidence of congruent trials was not higher than that of incongruent trials. Our results show that RTs were selectively shortened for congruent verbal–visual pairings, and that this shortening occurred over a wide range of cue–target intervals. We suggest that linguistic verbal information preactivates sensory representations, so that hearing the word 'red' preactivates (visual) sensory information internally.
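Schematically, a breaking continuous flash suppression (b-CFS) trial presents a dynamic mask to one eye while the target's contrast ramps up in the other; the reaction time is the time until the target breaks into awareness. The sketch below captures only that trial logic; the threshold values, ramp, and motor latency are simplified assumptions, not the study's parameters:

```python
import random

def bcfs_trial(congruent: bool, ramp_s: float = 5.0, dt: float = 1 / 60) -> float:
    """Simulate the time (s) for a target to break suppression; a congruent
    verbal cue is assumed to lower the effective breakthrough threshold."""
    threshold = 0.55 if congruent else 0.65  # assumed thresholds
    t, contrast = 0.0, 0.0
    while contrast < threshold and t < ramp_s:
        contrast += dt / ramp_s  # linear contrast ramp from 0 to 1 over ramp_s
        t += dt
    return t + random.gauss(0.25, 0.05)  # plus assumed motor response latency

print(f"congruent: {bcfs_trial(True):.2f} s, incongruent: {bcfs_trial(False):.2f} s")
```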


2021 ◽  
Author(s):  
Noemie Vilallongue ◽  
Julia Schaeffer ◽  
Anne-Marie Hesse ◽  
Celine Delpech ◽  
Antoine Paccard ◽  
...  

Long-distance regeneration of the central nervous system (CNS) has been achieved from the eye to the brain through activation of neuronal molecular pathways or through pharmacological approaches. Unexpectedly, most regenerating fibers display guidance defects, which prevent reinnervation and further functional recovery. Characterizing the mature neuronal environment is therefore essential to understanding adult axonal guidance and completing circuit reconstruction. To this end, we used mass spectrometry to characterize the proteomes of the major nuclei of the adult visual system: the suprachiasmatic nucleus (SCN), the ventral and dorsal lateral geniculate nuclei (vLGN, dLGN), and the superior colliculus (SC), as well as the optic chiasm. These analyses revealed the presence of guidance molecules and guidance-associated factors in the adult visual targets. Moreover, by performing bilateral optic nerve crush, we showed that the expression of some proteins in the visual targets was significantly modulated by the injury, even in the targets most distal to the lesion site. By contrast, the expression of guidance molecules was not modified upon injury, implying that these molecules may interfere with the reinnervation of the brain targets. Together, our results provide an extensive characterization of the molecular environment of the visual targets in intact and injured conditions. These findings open new ways to correct the guidance of regenerating axons, notably by manipulating the expression of the corresponding guidance receptors in the nervous system.
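Behind a claim like "significantly modulated by the injury" usually sits a per-protein differential-expression test: a log2 fold change between conditions plus a significance test. A minimal sketch under that standard assumption; the intensities below are invented and this is not the authors' pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def differential_expression(intact, injured):
    """Return (log2 fold change, p-value) for one protein's intensities."""
    log2fc = np.log2(np.mean(injured) / np.mean(intact))
    p = stats.ttest_ind(intact, injured).pvalue
    return log2fc, p

intact  = rng.lognormal(10.0, 0.2, 4)  # e.g., 4 replicates per condition
injured = rng.lognormal(10.5, 0.2, 4)
fc, p = differential_expression(intact, injured)
print(f"log2FC = {fc:.2f}, p = {p:.3f}")
```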

