visible target
Recently Published Documents

TOTAL DOCUMENTS: 28 (FIVE YEARS: 5)
H-INDEX: 12 (FIVE YEARS: 1)

Perception ◽ 2021 ◽ pp. 030100662110487
Author(s): Emily M. Crowe ◽ Martin Bossard ◽ Harun Karimpur ◽ Simon K. Rushton ◽ Katja Fiehler ◽ ...

Everyday movements are guided by objects’ positions relative to other items in the scene (allocentric information) as well as by objects’ positions relative to oneself (egocentric information). Allocentric information can guide movements to the remembered positions of hidden objects, but is it also used when the object remains visible? To stimulate the use of allocentric information, the position of the participant’s finger controlled the velocity of a cursor that they used to intercept moving targets, so there was no one-to-one mapping between the egocentric positions of the hand and the cursor. We evaluated whether participants relied on allocentric information by shifting all task-relevant items simultaneously, leaving their allocentric relationships unchanged. If participants relied on allocentric information, they should not have responded to this perturbation. However, they did: they responded in accordance with their responses to each item shifting independently, supporting the idea that the fast guidance of ongoing movements relies primarily on egocentric information.
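The velocity-control mapping described above can be sketched in a few lines of Python. The gain, 60 Hz update rate, and function names here are illustrative assumptions, not parameters reported in the study:

    import numpy as np

    def update_cursor(cursor_pos, finger_pos, finger_ref, gain=2.0, dt=1/60):
        """Velocity control: the finger's displacement from a reference point
        sets the cursor's velocity, not its position, so identical hand
        positions can correspond to many different cursor positions."""
        velocity = gain * (finger_pos - finger_ref)   # finger offset -> cursor speed
        return cursor_pos + velocity * dt             # integrate over one frame

    # Example: a finger held 2 cm right of the reference drifts the cursor rightward.
    pos = np.array([0.0, 0.0])
    for _ in range(60):  # one second at 60 Hz
        pos = update_cursor(pos, np.array([2.0, 0.0]), np.array([0.0, 0.0]))
    print(pos)  # ~[4. 0.]: 2 cm offset * gain 2.0 = 4 cm/s, integrated over 1 s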


Author(s): Madhur Mangalam ◽ I-Chieh Lee ◽ Karl M. Newell ◽ Damian G. Kelty-Stephen

Abstract: Standing still while focusing on a visible target in front of us is a preamble to many coordinated behaviors (e.g., reaching for an object). Hiding behind its apparent simplicity is a deep layering of texture at many scales. The task of standing still laces together activities at multiple scales: from ensuring that a few photoreceptors on the retina cover the target in the visual field at an extremely fine scale, to synergies spanning the limbs and joints at intermediate scales, to the mechanical layout of the ground underfoot and optic flow in the visual field at the coarsest scales. Here, we used multiscale probability density function (PDF) analysis to show that postural fluctuations exhibit statistical signatures of cascade dynamics similar to those found in fluid flow. In participants asked to stand quietly, the oculomotor strain of visually fixating at different distances moderated postural cascade dynamics. Fixating at a comfortable viewing distance elicited postural cascade dynamics similar to those observed with eyes closed. Greater viewing distances, known to stabilize posture, showed diminished cascade dynamics. In contrast, the nearest and farthest viewing distances, which required greater oculomotor strain to focus on the target, elicited a dramatic strengthening of postural cascade dynamics, reflecting active postural adjustments. Critically, these findings suggest that vision stabilizes posture by reconfiguring the prestressed poise that prepares the body to interact with different spatial layouts.
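A minimal sketch of the kind of analysis named above, assuming a simplified kurtosis-based variant of multiscale PDF analysis (published implementations typically fit the increment PDFs directly); the function name and scale choices are illustrative:

    import numpy as np

    def lambda_sq(series, scales):
        """Simplified multiscale PDF analysis: at each timescale s, take
        increments of the integrated fluctuation series, standardize them,
        and estimate the non-Gaussianity index lambda^2 from their kurtosis
        via the lognormal-cascade relation  kurtosis = 3 * exp(4 * lambda^2).
        lambda^2 near 0 means Gaussian fluctuations; larger values mean
        heavier-tailed, cascade-like intermittency."""
        x = np.cumsum(series - np.mean(series))      # integrate the series
        out = {}
        for s in scales:
            dx = x[s:] - x[:-s]                      # increments at scale s
            dx = (dx - dx.mean()) / dx.std()         # standardize
            k = np.mean(dx**4)                       # kurtosis of increments
            out[s] = max(0.0, np.log(k / 3.0) / 4.0)
        return out

    # Toy check: white Gaussian noise should give lambda^2 near 0 at all scales.
    rng = np.random.default_rng(0)
    print(lambda_sq(rng.standard_normal(30000), scales=[10, 30, 100, 300]))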


2019 ◽ Vol 9 (1)
Author(s): Martin Szinte ◽ Michael Puntiroli ◽ Heiner Deubel

Abstract: When preparing a saccade, attentional resources are focused at the saccade target and its immediate vicinity. Here we show that this does not hold true when saccades are prepared toward a recently extinguished target. We obtained detailed maps of orientation sensitivity while participants prepared a saccade toward a target that either remained on the screen or disappeared before the eyes moved. We found that attention was mainly focused on the immediate surround of the visible target and spread to more peripheral locations as a function of the distance from the cue and the delay between the target’s disappearance and the saccade. Interestingly, this spread was not accompanied by a spread of the saccade endpoints. These results suggest that presaccadic attention and saccade programming are two distinct processes that can be dissociated as a function of their interaction with the spatial configuration of the visual scene.
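The abstract does not detail how the sensitivity maps were computed; a common way to build such maps is a per-location d' from detection counts, sketched here with hypothetical numbers:

    import numpy as np
    from scipy.stats import norm

    def dprime_map(hits, misses, fas, crs):
        """Sensitivity d' per probe location from per-location counts of
        hits, misses, false alarms, and correct rejections. A log-linear
        correction (+0.5) keeps rates away from 0 and 1."""
        hr = (hits + 0.5) / (hits + misses + 1.0)
        fr = (fas + 0.5) / (fas + crs + 1.0)
        return norm.ppf(hr) - norm.ppf(fr)

    # Hypothetical counts at four probe locations (target-adjacent to peripheral).
    hits   = np.array([46, 38, 25, 18])
    misses = np.array([ 4, 12, 25, 32])
    fas    = np.array([ 6,  7,  8,  9])
    crs    = np.array([44, 43, 42, 41])
    print(dprime_map(hits, misses, fas, crs))  # d' falls off with distance from target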


2019 ◽ Vol 1305 ◽ pp. 012048
Author(s): C J Watson ◽ A U Yeo ◽ J R Supple ◽ M Geso ◽ T Kron ◽ ...

2019
Author(s): Kaushik J Lakshminarasimhan ◽ Eric Avila ◽ Erin Neyhart ◽ Gregory C DeAngelis ◽ Xaq Pitkow ◽ ...

Summary: To take the best actions, we often need to maintain and update beliefs about variables that cannot be directly observed. To understand the principles underlying such belief updates, we need tools to uncover subjects’ belief dynamics from natural behaviour. We tested whether eye movements could be used to infer subjects’ beliefs about latent variables in a naturalistic visuomotor navigation task. We observed eye movements that appeared to continuously track the goal location even when no visible target was present there. Accurate goal-tracking was associated with improved task performance, and inhibiting eye movements in humans impaired navigation precision. By using passive stimulus playback and manipulating stimulus reliability, we show that subjects’ eye movements are likely voluntary rather than reflexive. These results suggest that gaze dynamics play a key role in action selection during challenging visuomotor behaviours and may serve as a window into the subject’s dynamically evolving internal beliefs.
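One hedged way to quantify goal-tracking of this sort is to correlate gaze position with the latent goal's position over time. This toy Python sketch is an illustration under that assumption, not the paper's actual metric:

    import numpy as np

    def goal_tracking_index(gaze, goal):
        """Correlate gaze position with the (invisible) goal's screen position
        over time, separately per axis, and average. Values near 1 indicate
        that the eyes continuously track the latent goal location."""
        r = [np.corrcoef(gaze[:, i], goal[:, i])[0, 1] for i in range(2)]
        return float(np.mean(r))

    # Toy trial: gaze follows the goal with noise and a small lag.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 500)
    goal = np.stack([10 * t, 5 * np.sin(2 * np.pi * t)], axis=1)
    gaze = np.roll(goal, 10, axis=0) + rng.normal(0, 0.3, goal.shape)
    print(goal_tracking_index(gaze, goal))  # close to 1 for accurate tracking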


2017 ◽ Vol 29 (2) ◽ pp. 266-277
Author(s): Natalie Biderman ◽ Liad Mudrik

Is consciousness necessary for integration? Findings of seemingly high-level object-scene integration in the absence of awareness have challenged major theories in the field and attracted considerable scientific interest. Recently, one of these findings was called into question by a failure to replicate, while the other remained uncontested. Here, we show that this latter finding, slowed-down performance on a visible target following a masked prime scene that includes an incongruent object, is also not reproducible. Using Bayesian statistics, we found evidence against unconscious integration of objects and scenes. Put differently, there is currently no compelling evidence for object-scene congruency processing in the absence of awareness. Intriguingly, however, our results do suggest that consciously experienced yet briefly presented incongruent scenes take longer to process, even when subjects do not explicitly detect their incongruency.
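Evidence for a null effect of this kind is typically quantified with a default Bayes-factor t-test. Below is a sketch of the JZS Bayes factor of Rouder et al. (2009), with an assumed Cauchy scale of 0.707 and made-up numbers; it is not the paper's exact analysis:

    import numpy as np
    from scipy.integrate import quad

    def jzs_bf01(t, n, r=0.707):
        """Bayes factor BF01 (evidence for the null) for a one-sample or
        paired t statistic under the JZS prior: a Cauchy(0, r) prior on
        effect size under H1. BF01 > 3 is commonly read as moderate
        evidence for the null."""
        v = n - 1                                   # degrees of freedom
        null_like = (1 + t**2 / v) ** (-(v + 1) / 2)

        def integrand(g):                           # marginal likelihood under H1
            prior = r / np.sqrt(2 * np.pi) * g**-1.5 * np.exp(-r**2 / (2 * g))
            like = ((1 + n * g) ** -0.5
                    * (1 + t**2 / ((1 + n * g) * v)) ** (-(v + 1) / 2))
            return like * prior

        alt_like, _ = quad(integrand, 0, np.inf)
        return null_like / alt_like

    # e.g. a near-zero congruency effect, t(29) = 0.4 with 30 subjects:
    print(jzs_bf01(t=0.4, n=30))  # > 3: moderate evidence for the null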


2017
Author(s): Charles A. Michelson ◽ Jonathan W. Pillow ◽ Eyal Seidemann

Abstract: While performing challenging perceptual tasks, such as detecting a barely visible target, our perceptual reports vary across presentations of identical stimuli. This perceptual variability is presumably caused by neural variability in our brains. How much of the neural variability that correlates with perceptual variability is present in the primary visual cortex (V1), the first cortical processing stage of visual information? To address this question, we recorded neural population responses from V1 using voltage-sensitive dye imaging while monkeys performed a challenging reaction-time visual detection task. We found that V1 responses in the period leading up to the decision corresponded more closely to the monkey’s report than to the visual stimulus. These results, together with a simple computational model that allows one to quantify the captured choice-related variability, suggest that most of this variability is present in V1 and that areas outside V1 contain relatively little independent choice-related variability.
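A standard way to quantify choice-related variability in such data is the choice probability, the ROC area separating responses by perceptual report. This sketch with simulated responses illustrates the measure; it is not the paper's specific model:

    import numpy as np

    def choice_probability(resp_choice1, resp_choice2):
        """Choice probability: the area under the ROC curve separating
        neural responses on trials ending in one perceptual report versus
        the other, for identical stimuli. 0.5 means responses carry no
        choice information. Computed via the rank-sum (Mann-Whitney U)
        identity, counting ties as half."""
        x, y = np.asarray(resp_choice1), np.asarray(resp_choice2)
        diff = x[:, None] - y[None, :]   # all (choice-1, choice-2) trial pairs
        return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

    # Hypothetical V1 population responses (a.u.) on identical-stimulus trials:
    rng = np.random.default_rng(2)
    seen = rng.normal(1.2, 1.0, 200)     # trials reported "target present"
    missed = rng.normal(1.0, 1.0, 200)   # trials reported "target absent"
    print(choice_probability(seen, missed))  # a bit above 0.5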


2017
Author(s): Michael Puntiroli ◽ Heiner Deubel ◽ Martin Szinte

Summary: When preparing a saccade, attentional resources are focused at the saccade target and its immediate vicinity. Here we show that this does not hold true when saccades are prepared towards a recently extinguished target. We obtained detailed maps of orientation sensitivity while participants prepared a saccade toward a target that either remained on the screen or disappeared before the eyes moved. We found that attention was mainly focused on the immediate surround of the visible target and increasingly spread to more peripheral locations as a function of the delay between the target’s disappearance and the saccade. Interestingly, this spread was accompanied by an overall increase in sensitivity, speaking against a dilution of limited resources over a larger spatial area. We hypothesize that these results reflect the behavioral consequences of the spatio-temporal dynamics of visual receptive fields in the presence and absence of a structured visual cue.
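To distinguish a spread of attention from an overall gain change, one possible analysis is to fit a Gaussian profile to each sensitivity map and compare its amplitude and width across delays. The profile values below are hypothetical, and this is an illustrative sketch rather than the study's actual fitting procedure:

    import numpy as np
    from scipy.optimize import curve_fit

    def gauss(x, amp, width, base):
        """Gaussian sensitivity profile over distance from the cued location."""
        return base + amp * np.exp(-x**2 / (2 * width**2))

    # Hypothetical d' profiles (distance in deg) for short vs long target-blank delays.
    dist = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
    short_delay = np.array([2.1, 1.5, 0.9, 0.6, 0.5])
    long_delay  = np.array([2.4, 2.0, 1.5, 1.1, 0.8])

    p_short, _ = curve_fit(gauss, dist, short_delay, p0=[2.0, 3.0, 0.5])
    p_long, _  = curve_fit(gauss, dist, long_delay,  p0=[2.0, 3.0, 0.5])
    print("short delay: amp=%.2f width=%.2f" % (p_short[0], p_short[1]))
    print("long delay:  amp=%.2f width=%.2f" % (p_long[0], p_long[1]))
    # A wider AND higher profile at the long delay would argue against a fixed
    # pool of resources being diluted over more space.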


2014 ◽ Vol 369 (1641) ◽ pp. 20130212
Author(s): Simon van Gaal ◽ Lionel Naccache ◽ Julia D. I. Meuwese ◽ Anouk M. van Loon ◽ Alexandra H. Leighton ◽ ...

What are the limits of unconscious language processing? Can language circuits process simple grammatical constructions unconsciously and integrate the meaning of several unseen words? Using behavioural priming and electroencephalography (EEG), we studied a specific rule-based linguistic operation traditionally thought to require conscious cognitive control: the negation of valence. In a masked priming paradigm, two masked words, a modifier (‘not’/‘very’) and an adjective (e.g. ‘good’/‘bad’), were presented successively (Experiment 1) or simultaneously (Experiment 2), followed by a visible target noun (e.g. ‘peace’/‘murder’). Subjects indicated whether the target noun had a positive or negative valence. The combination of these three words could be either contextually consistent (e.g. ‘very bad - murder’) or inconsistent (e.g. ‘not bad - murder’). EEG recordings revealed that grammatical negations could unfold partly unconsciously, as reflected in similar occipito-parietal N400 effects for conscious and unconscious three-word sequences forming inconsistent combinations. However, only conscious word sequences elicited later P600 effects. Overall, these results suggest that multiple unconscious words can be rapidly integrated and that an unconscious negation can automatically ‘flip the sign’ of an unconscious adjective. These findings not only extend the limits of subliminal combinatorial language processing but also highlight how consciousness modulates the grammatical integration of multiple words.
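An N400 effect of this kind is conventionally quantified as the mean amplitude of the inconsistent-minus-consistent difference wave in a 300-500 ms window. The sketch below uses simulated epochs; the window, sampling rate, and effect size are assumptions, not the study's values:

    import numpy as np

    def n400_effect(erp_incons, erp_cons, times, win=(0.3, 0.5)):
        """Mean-amplitude N400 effect: average the ERP difference wave
        (inconsistent minus consistent) over a 300-500 ms window. A more
        negative value indicates a stronger semantic-integration response.
        erp_* : arrays of shape (n_trials, n_samples), in microvolts."""
        mask = (times >= win[0]) & (times <= win[1])
        diff = erp_incons.mean(axis=0) - erp_cons.mean(axis=0)  # difference wave
        return diff[mask].mean()

    # Toy data: 1 s epochs at 250 Hz with a small negativity on inconsistent trials.
    rng = np.random.default_rng(3)
    times = np.arange(250) / 250.0
    cons = rng.normal(0, 2, (80, 250))
    incons = rng.normal(0, 2, (80, 250)) - 1.5 * np.exp(-((times - 0.4) / 0.05) ** 2)
    print(n400_effect(incons, cons, times))  # negative: an N400-like effect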

