visual target
Recently Published Documents

TOTAL DOCUMENTS: 563 (last five years: 98)
H-INDEX: 51 (last five years: 4)

Perception ◽ 2021 ◽ pp. 030100662110656
Author(s): John J.-J. Kim ◽ Meaghan E. McManus ◽ Laurence R. Harris

Here, we investigate how body orientation relative to gravity affects the perceived size of visual targets. In virtual reality, participants judged the size of a visual target projected at simulated distances of between 2 and 10 m, comparing it to a physical reference length held in their hands while standing, lying prone, or lying supine. To perceive the target as matching the physical reference length, participants needed to make its visual size 5.4% larger when supine and 10.1% larger when prone than when upright. Needing to make the target larger when lying down suggests several possibilities, which are not mutually exclusive. While tilted, participants may have perceived the targets as smaller than when upright; they may have perceived the targets as closer; or they may have perceived the physical reference length as longer. Misperceiving objects as larger and/or closer when lying down may provide a survival benefit in such a vulnerable position.


2021 ◽ Vol 11 (1)
Author(s): Gerolamo Carboni ◽ Thrishantha Nanayakkara ◽ Atsushi Takagi ◽ Etienne Burdet

While the nervous system can coordinate muscle activation to shape mechanical interaction with the environment, it is unclear if and how coactivation of the arm's muscles influences visuo-haptic perception and motion planning. Here we show that the nervous system can voluntarily coactivate muscles to improve the quality of the haptic percept. Subjects tracked a randomly moving visual target to which they were physically coupled through a virtual elastic band, whose coupling stiffness increased with wrist coactivation. Subjects initially relied on vision alone to track the target, but with practice they learned to combine the visual and haptic percepts in a Bayesian manner to improve their tracking performance. This improvement cannot be explained by the stronger mechanical guidance from the elastic band. These results suggest that, with practice, the nervous system can learn to integrate a novel haptic percept with vision in an optimal fashion.
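The Bayesian combination of cues described in this abstract is usually modeled as reliability-weighted (minimum-variance) averaging: each cue contributes in inverse proportion to its variance. A minimal sketch follows; the numeric values are illustrative assumptions, not data from the study.

```python
def fuse_estimates(x_vis, var_vis, x_hap, var_hap):
    """Reliability-weighted (minimum-variance) fusion of two cues.

    Each cue is weighted by its inverse variance; the fused
    variance is lower than that of either cue alone.
    """
    w_vis = (1.0 / var_vis) / (1.0 / var_vis + 1.0 / var_hap)
    x_fused = w_vis * x_vis + (1.0 - w_vis) * x_hap
    var_fused = 1.0 / (1.0 / var_vis + 1.0 / var_hap)
    return x_fused, var_fused

# Illustrative numbers (not from the study): vision noisier than haptics.
x, v = fuse_estimates(x_vis=1.0, var_vis=4.0, x_hap=0.0, var_hap=1.0)
```

With these hypothetical variances, the fused estimate lies closer to the more reliable haptic cue, and the fused variance falls below both single-cue variances, which is the sense in which the integration is "optimal."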


2021
Author(s): Dominika Drążyk ◽ Marcus Missal

Expected surprise can be defined as the anticipated uncertainty associated with the future occurrence of a target of interest. We hypothesized that spatial expected surprise could affect anticipatory and visual gaze orientation differently. We tested this hypothesis in humans using a saccadic reaction time task in which a cue indicated the future position of a stimulus. In the 'no expected surprise' condition, the visual target could appear only at the previously cued location. In the other conditions, more likely future positions were cued, with increasing expected surprise. Anticipation was more frequent and pupil size was larger in the no expected surprise condition than in all other conditions. The latency of visually guided saccades increased linearly with the logarithm of surprise, but their maximum velocity did not. In conclusion, before stimulus appearance, oculomotor responses were altered, probably because of increased arousal in the no expected surprise condition. After stimulus appearance, the saccadic decision signal could be scaled logarithmically as a function of surprise (Hick's law). However, maximum velocity also reflected increased arousal in the no surprise condition. Therefore, expected surprise alters the balance between anticipatory and visually guided responses and affects movement kinematics and latency differently.
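The Hick's-law relation invoked here can be sketched by treating the surprise of a target location with probability p as its Shannon self-information, and modeling saccade latency as linear in that information content. The baseline and slope below are hypothetical, not fitted values from the study.

```python
import math

def surprise_bits(p):
    """Shannon surprise (self-information) of an outcome with probability p."""
    return -math.log2(p)

def saccade_latency_ms(p, base_ms=180.0, slope_ms_per_bit=15.0):
    """Hick-style linear latency model: latency grows with the target's
    information content. base_ms and slope_ms_per_bit are hypothetical."""
    return base_ms + slope_ms_per_bit * surprise_bits(p)

# A fully cued target (p = 1) carries zero surprise, so latency stays
# at baseline; four equally likely locations (p = 0.25) give 2 bits.
```

Under this sketch, the no-expected-surprise condition (p = 1) predicts the shortest visually guided latency, and latency grows as the cued location becomes less certain.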


Author(s): Pierre-Michel Bernier ◽ James Mathew ◽ Frederic R. Danion

Adapting hand movements to changes in our body or the environment is essential for skilled motor behavior, as is the ability to flexibly combine experience gathered in separate contexts. However, it has been shown that interference effects can occur when adapting hand movements to two different visuomotor perturbations in succession. Here we investigate whether these interference effects compromise our ability to adapt to the superposition of the two perturbations. Using a joystick, participants tracked a visual target that followed a smooth but unpredictable trajectory. Four separate groups of participants (total n = 83) completed one block of 50 trials under each of three mappings: one in which the cursor was rotated by 90° (ROTATION), one in which the cursor mimicked the behavior of a mass-spring system (SPRING), and one in which the SPRING and ROTATION mappings were superimposed (SPROT). The order of the blocks differed across groups. Although interference effects were found when switching between SPRING and ROTATION, participants who performed these blocks first performed better in SPROT than participants who had no prior experience with SPRING and ROTATION (i.e., composition). Moreover, participants who started with SPROT performed better under SPRING and ROTATION than participants who had no prior experience with each of these mappings (i.e., decomposition). Additional analyses confirmed that these effects resulted from components of learning specific to the rotational and spring perturbations. These results show that interference effects do not preclude the ability to compose and decompose various forms of visuomotor adaptation.


Author(s): Keiichiro Inagaki ◽ Nobuhiko Wagatsuma ◽ Sou Nobukawa

The incidence of human-error-related traffic collisions is markedly lower among drivers with a few years of driving experience than among those with little experience or few driving opportunities, even if the latter hold a driver's license. This study analyzes the effect of driving experience on the perception of traffic scenes through electroencephalograms (EEGs). We focused on visual attention during driving, an essential visual function in visual search and gaze control, and evaluated the P300 component, which is involved in attention, to explore the effect of driving experience on visual attention to traffic scenes rather than on visual ability itself. The P300 response was observed in both experienced and beginner drivers when they paid visual attention to the visual target. Furthermore, the peak latency of the P300 response was markedly shorter in experienced drivers than in beginner drivers, suggesting that P300 latency carries crucial information about the effect of driving experience on visual attention.
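Extracting a P300 peak latency from an averaged ERP waveform amounts to finding the maximum amplitude within a post-stimulus search window. A minimal sketch follows; the 250–500 ms window and the synthetic waveform are conventional illustrative assumptions, not details taken from the study.

```python
import numpy as np

def p300_peak(times_ms, erp_uv, window=(250.0, 500.0)):
    """Return (latency_ms, amplitude_uv) of the largest deflection
    within the search window. times_ms: time axis in milliseconds;
    erp_uv: averaged ERP amplitude in microvolts."""
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    idx = np.argmax(erp_uv[mask])
    return times_ms[mask][idx], erp_uv[mask][idx]

# Synthetic ERP: a Gaussian "P300" peaking at 350 ms with 8 uV amplitude.
t = np.arange(0.0, 800.0, 2.0)
erp = 8.0 * np.exp(-((t - 350.0) ** 2) / (2 * 40.0 ** 2))
latency, amplitude = p300_peak(t, erp)
```

Comparing the latency returned for experienced versus beginner drivers' averaged waveforms is the kind of contrast the abstract reports.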


2021 ◽ Vol 21 (9) ◽ pp. 2237
Author(s): Annalisa Bosco ◽ Matteo Filippini ◽ Patrizia Fattori


Author(s): Kevin Lieberman ◽ Nadine Sarter

Breakdowns in human-robot teaming can result from trust miscalibration, i.e., a poor mapping of trust onto a system's actual capabilities, which leads to misuse or disuse of the technology. Trust miscalibration also negatively affects operators' top-down attention allocation and their monitoring of the system. This experiment assessed the efficacy of visual and auditory representations of a system's confidence in its own abilities for supporting trust specificity, attention management, and joint performance in the context of a UAV-supported target detection task. In contrast to earlier studies, neither visual nor auditory confidence information improved detection accuracy. Visual representations of confidence led to slower response times than auditory representations, likely because of resource competition with the visual target detection task. Finally, slower response times were observed when a UAV incorrectly detected a target. These results can inform the design of visual and auditory representations of system confidence in human-machine teams with high attention demands.

