Testing temporal integration of feature probability distributions using role-reversal effects in visual search

2021 ◽ Vol 188 ◽ pp. 211-226
Author(s): Ömer Dağlar Tanrıkulu, Andrey Chetverikov, Árni Kristjánsson

The visual system is sensitive to statistical properties of complex scenes and can encode feature probability distributions in detail. This encoding could reflect a passive process due to the visual system’s sensitivity to temporal perturbations in the input, or a more active process of building probabilistic representations. To investigate this, we examined how observers temporally integrate two different orientation distributions from sequentially presented visual search trials. If the encoded probabilistic information is used in a Bayesian-optimal way, observers should weight more reliable information more strongly, such as feature distributions with low variance. We therefore manipulated the variance of the two feature distributions. Participants performed sequential odd-one-out visual search for an oddly oriented line among distractors. During successive learning trials, the distractor orientations were sampled from two different Gaussian distributions on alternating trials. Observers then performed a ‘test trial’ in which the orientations of the target and distractors were switched, allowing us to assess observers’ internal representations of the distractor distributions from changes in response times. In three experiments, observers’ search times on test trials depended mainly on the very last learning trial, indicating little temporal integration. Since temporal integration has previously been observed with this method, we conclude that when the input is unreliable, the visual system relies on the most recent stimulus instead of integrating it with previous ones. This indicates that the visual system preferentially utilizes sensory history when the statistical properties of the environment are relatively stable.
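To make the Bayesian-optimal prediction concrete, the following minimal Python sketch shows inverse-variance (reliability) weighting of two orientation distributions; the means and standard deviations are illustrative assumptions, not the parameters used in these experiments.

import math

# Sketch of Bayesian (inverse-variance) weighting of two sequentially
# encoded orientation distributions. All numbers are illustrative.
mu_a, sigma_a = 20.0, 5.0    # mean/SD (deg) of the low-variance distractor distribution
mu_b, sigma_b = 40.0, 15.0   # mean/SD (deg) of the high-variance distractor distribution

w_a = 1.0 / sigma_a**2       # reliability = inverse variance
w_b = 1.0 / sigma_b**2

# An ideal observer integrating both distributions weights the
# low-variance (more reliable) one more strongly:
mu_combined = (w_a * mu_a + w_b * mu_b) / (w_a + w_b)
sigma_combined = math.sqrt(1.0 / (w_a + w_b))

print(f"integrated mean: {mu_combined:.1f} deg")    # pulled toward the reliable distribution
print(f"integrated SD:   {sigma_combined:.1f} deg") # narrower than either input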


2020
Author(s): Sabrina Hansmann-Roth, Sóley Thorsteinsdóttir, Joy Geng, Arni Kristjansson

Humans are surprisingly good at learning the characteristics of their visual environment. Recent studies have revealed that the visual system can learn not only repeated features of visual search distractors but also their actual probability distributions: search times were determined by the frequency of distractor features over consecutive search trials. Distractor distributions involve many exemplars on each trial, but whether observers can learn distributions where only a single exemplar from the distribution is presented on each trial is unknown. Here, we investigated potential learning of probability distributions of single targets during visual search. Over blocks of trials, observers searched for an oddly colored target that was drawn from either a Gaussian or a uniform distribution. Search was influenced not only by the repetition of a target feature but, more interestingly, also by the probability of that feature within trial blocks. The same search targets, coming from the extremes of the two distributions, were found significantly more slowly during blocks where targets were drawn from a Gaussian distribution than from a uniform distribution, indicating that observers were sensitive to the target probability determined by the distribution shape. In Experiment 2 we replicated the effect using binned distributions and revealed the limitations of target distribution encoding by using a more complex target distribution. Our results demonstrate detailed internal representations of target feature distributions and show that the visual system integrates probability distributions of target colors over surprisingly long trial sequences.
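As a rough illustration of why the same extreme target should be harder to find in Gaussian blocks than in uniform blocks, the short Python sketch below compares the probability density of an edge-of-range feature value under the two block distributions; the feature axis, range, mean, and SD are illustrative assumptions rather than the values used in these experiments.

import math

def gaussian_pdf(x, mu, sigma):
    """Probability density of x under a Gaussian with mean mu and SD sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

lo, hi = 0.0, 60.0                 # feature range (arbitrary color units)
mu, sigma = 30.0, 10.0             # Gaussian block distribution (illustrative)
uniform_density = 1.0 / (hi - lo)  # flat density across the same range

extreme_target = 58.0              # a target feature near the edge of the range
print("density under Gaussian block:", gaussian_pdf(extreme_target, mu, sigma))  # ~0.0008
print("density under uniform block: ", uniform_density)                          # ~0.0167
# The identical target is far less probable in Gaussian blocks, consistent
# with the slower search times observed there.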


2021 ◽ Vol 21 (9) ◽ pp. 1969
Author(s): Sabrina Hansmann-Roth, Sóley Thorsteinsdóttir, Joy Geng, Árni Kristjánsson

Author(s): Christopher Rorden, Arni Kristjansson, Kathleen Pirog Revill, Styrmir Saevarsson

1979 ◽ Vol 31 (2) ◽ pp. 287-304
Author(s): Wolfgang Prinz

Two experiments studied how information about the nontarget items in a visual search task is used for the control of the search. The first experiment used the detection of “hurdle” stimuli to demonstrate that efficient memory representations of the context items can be established within each particular trial. This finding is explained by a model for the short-term integration of context information. The second experiment, which varied the complexity of the local but not the global context, provided some information about the nature of the integration operations involved. In its final version, the model postulates two stages of processing with independent mechanisms of integration. Spatial integration at the first stage deletes repetitions within small samples. Temporal integration at the second stage stores and primes the memory representations of the context items over larger intervals. It is assumed that transient temporal integration within trials is mediated by the same mechanism that underlies permanent temporal integration between trials.
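A minimal Python sketch of the two-stage scheme described above: a spatial stage that deletes repetitions within a small sample of context items, and a temporal stage that stores and primes representations of those items across trials. The function and class names, decay constant, and priming rule are illustrative assumptions, not part of the model's specification.

def spatial_integration(sample):
    """Stage 1: delete repetitions within a small sample of context items."""
    return set(sample)

class TemporalIntegration:
    """Stage 2: store and prime memory representations of context items
    over larger intervals (within and between trials)."""

    def __init__(self, decay=0.5):
        self.priming = {}    # context item -> activation level
        self.decay = decay   # illustrative decay of stored representations

    def update(self, items):
        # stored representations decay over time ...
        for item in self.priming:
            self.priming[item] *= self.decay
        # ... while items in the current sample are (re)primed
        for item in items:
            self.priming[item] = self.priming.get(item, 0.0) + 1.0

memory = TemporalIntegration()
for trial_context in (["A", "A", "B"], ["B", "C"], ["B", "B", "D"]):
    memory.update(spatial_integration(trial_context))
print(memory.priming)   # repeatedly encountered context items carry higher activation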


2000 ◽ Vol 12 (8) ◽ pp. 1839-1867
Author(s): Pierre-Yves Burgi, Alan L. Yuille, Norberto M. Grzywacz

We develop a theory for the temporal integration of visual motion motivated by psychophysical experiments. The theory proposes that input data are temporally grouped and used to predict and estimate the motion flows in the image sequence. This temporal grouping can be considered a generalization of the data association techniques that engineers use to study motion sequences. Our temporal grouping theory is expressed in terms of the Bayesian generalization of standard Kalman filtering. To implement the theory, we derive a parallel network that shares some properties of cortical networks. Computer simulations of this network demonstrate that our theory qualitatively accounts for psychophysical experiments on motion occlusion and motion outliers. In deriving our theory, we assumed spatial factorizability of the probability distributions and made the approximation of updating the marginal distributions of velocity at each point. This allowed us to perform local computations and simplified our implementation. We argue that these approximations are suitable for the stimuli we are considering (for which spatial coherence effects are negligible).
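The following minimal Python sketch shows one-dimensional Kalman-filter integration of noisy velocity measurements, in the spirit of the Bayesian/Kalman framework the abstract describes; the full model tracks velocity distributions at every image point, and the noise parameters used here are illustrative assumptions.

import random

def kalman_velocity(measurements, meas_var=1.0, process_var=0.1):
    """Temporally integrate noisy velocity measurements with a 1-D Kalman filter."""
    v_est, v_var = 0.0, 1000.0              # diffuse prior over velocity
    estimates = []
    for z in measurements:
        v_var += process_var                # predict: uncertainty grows between frames
        gain = v_var / (v_var + meas_var)   # weight measurement by relative reliability
        v_est += gain * (z - v_est)         # update the velocity estimate
        v_var *= (1.0 - gain)               # and its remaining uncertainty
        estimates.append(v_est)
    return estimates

# Noisy samples of a true velocity of 2.0 (arbitrary units)
noisy = [2.0 + random.gauss(0.0, 1.0) for _ in range(20)]
print(kalman_velocity(noisy))               # estimates converge toward ~2.0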


Perception ◽ 1976 ◽ Vol 5 (2) ◽ pp. 225-231
Author(s): Robert T Solman

By increasing the number of display items and the physical similarity between the target and the irrelevant items it was possible to vary the difficulty of target selection in a visual-search task. The results showed that the accuracy with which the target was located declined as target selection became more difficult. On the other hand, estimates of the cumulative probability and the probability distributions of times necessary to form the icon indicated that these times were not influenced by changes in the difficulty of the task. The latter result supports Neisser's suggestion that the information processing carried out during the first stage of analysis can be attributed to the action of a distinct cognitive mechanism.


Perception ◽ 10.1068/p7469 ◽ 2013 ◽ Vol 42 (4) ◽ pp. 470-472
Author(s): Jeremy Schwark, Igor Dolgov
