feature probability
Recently Published Documents


TOTAL DOCUMENTS: 9 (five years: 6)
H-INDEX: 2 (five years: 1)

Author(s): Sabrina Hansmann-Roth, Sóley Þorsteinsdóttir, Joy J. Geng, Árni Kristjánsson

2021, Vol 21 (9), pp. 1969
Author(s): Sabrina Hansmann-Roth, Sóley Thorsteinsdóttir, Joy Geng, Árni Kristjánsson

2020
Author(s): Sabrina Hansmann-Roth, Sóley Thorsteinsdóttir, Joy Geng, Árni Kristjánsson

Humans are surprisingly good at learning the characteristics of their visual environment. Recent studies have revealed that the visual system can learn not only repeated features of visual search distractors but also their actual probability distributions: search times were determined by the frequency of distractor features over consecutive search trials. Distractor distributions involve many exemplars on each trial, however, and whether observers can learn distributions where only a single exemplar from the distribution is presented on each trial is unknown. Here, we investigated potential learning of probability distributions of single targets during visual search. Over blocks of trials, observers searched for an oddly colored target that was drawn from either a Gaussian or a uniform distribution. Search was influenced not only by the repetition of a target feature but, more interestingly, also by the probability of that feature within trial blocks. The same search targets, coming from the extremes of the two distributions, were found significantly more slowly during blocks where target colors were drawn from a Gaussian distribution than from a uniform distribution, indicating that observers were sensitive to the target probability determined by the shape of the distribution. In Experiment 2, we replicated the effect using binned distributions and revealed the limitations of target distribution encoding by using a more complex target distribution. Our results demonstrate detailed internal representations of target feature distributions and show that the visual system integrates probability distributions of target colors over surprisingly long trial sequences.
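
The key contrast here is how likely an extreme feature value is under each block's distribution. Below is a minimal Python sketch of that point; the hue units, means, spreads, and trial counts are illustrative assumptions, not the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions, not the study's values):
MEAN_HUE = 180.0   # block mean of the target hue, in degrees
GAUSS_SD = 10.0    # SD of the Gaussian feature distribution
HALF_RANGE = 20.0  # half-range of a range-matched uniform distribution
N_TRIALS = 100     # search trials per block

# One single-exemplar target per trial, drawn from the block's distribution.
gaussian_block = rng.normal(MEAN_HUE, GAUSS_SD, size=N_TRIALS)
uniform_block = rng.uniform(MEAN_HUE - HALF_RANGE, MEAN_HUE + HALF_RANGE,
                            size=N_TRIALS)

# Near-extreme targets (>15 deg from the mean) are rare under the Gaussian
# but common under the uniform distribution.
n_far_gauss = (np.abs(gaussian_block - MEAN_HUE) > 15.0).sum()
n_far_unif = (np.abs(uniform_block - MEAN_HUE) > 15.0).sum()
print(f"targets >15 deg from mean: Gaussian {n_far_gauss}, uniform {n_far_unif}")

# The same comparison in terms of probability density at 18 deg (1.8 SD):
x = 18.0
p_gauss = np.exp(-0.5 * (x / GAUSS_SD) ** 2) / (GAUSS_SD * np.sqrt(2 * np.pi))
p_unif = 1.0 / (2 * HALF_RANGE)
print(f"density at +18 deg: Gaussian {p_gauss:.4f}, uniform {p_unif:.4f}")
# ~0.0079 vs 0.0250: the identical extreme target is roughly three times
# rarer in Gaussian blocks, matching the slower search times reported there.
```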


2020
Author(s): Ömer Dağlar Tanrıkulu, Andrey Chetverikov, Árni Kristjánsson

The visual system is sensitive to the statistical properties of complex scenes and can encode feature probability distributions in detail. This encoding could reflect a passive process arising from the visual system's sensitivity to temporal perturbations in the input, or a more active process of building probabilistic representations. To investigate this, we examined how observers temporally integrate two different orientation distributions from sequentially presented visual search trials. If the encoded probabilistic information is used in a Bayesian-optimal way, observers should weight more reliable information more strongly, such as feature distributions with low variance. We therefore manipulated the variance of the two feature distributions. Participants performed sequential odd-one-out visual search for an oddly oriented line among distractors. During successive learning trials, the distractor orientations were sampled from two different Gaussian distributions on alternating trials. Observers then performed a 'test trial' in which the orientations of the target and distractors were switched, allowing us to assess observers' internal representations of the distractor distributions based on changes in response times. In three experiments, we observed that search times on test trials depended mainly on the very last learning trial, indicating little temporal integration. Since temporal integration has previously been observed with this method, we conclude that when the input is unreliable, the visual system relies on the most recent stimulus instead of integrating it with previous ones. This indicates that the visual system preferentially utilizes sensory history when the statistical properties of the environment are relatively stable.
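
For reference, the Bayesian-optimal rule the abstract invokes weights each Gaussian source by its precision (the inverse of its variance), so the low-variance distribution should dominate the fused estimate. A minimal sketch of that weighting follows; the orientation values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def precision_weighted_combination(mu1, sd1, mu2, sd2):
    """Fuse two independent Gaussian estimates by weighting each
    with its precision (1 / variance), the Bayes-optimal rule."""
    w1, w2 = 1.0 / sd1**2, 1.0 / sd2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    sd = np.sqrt(1.0 / (w1 + w2))
    return mu, sd

# Two alternating distractor distributions: a narrow (reliable) one and a
# broad (unreliable) one, in degrees of orientation (assumed values).
mu, sd = precision_weighted_combination(mu1=30.0, sd1=5.0,   # narrow
                                        mu2=60.0, sd2=20.0)  # broad
print(f"fused estimate: {mu:.1f} deg (sd {sd:.1f})")
# Prints ~31.8 deg: the low-variance distribution dominates. This is the
# reliability weighting the study tested for; its data instead showed
# observers relying on the most recent trial.
```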


Sensors, 2019, Vol 19 (17), pp. 3728
Author(s): Zhou, Wang, Sun, Sun

Text representation is one of the key tasks in the field of natural language processing (NLP). Traditional feature extraction and weighting methods often use the bag-of-words (BoW) model, which can lead to a lack of semantic information as well as problems of high dimensionality and high sparsity. A popular current approach to these problems is to use deep learning methods. In this paper, feature weighting, word embedding, and topic models are combined to propose an unsupervised text representation method named the feature, probability, and word embedding method. The main idea is to use the Word2Vec word embedding technique to obtain word vectors and then combine them with TF-IDF feature weighting and the LDA topic model. Compared with traditional feature engineering, the proposed method not only increases the expressive ability of the vector space model but also reduces the dimensionality of the document vector. Moreover, it addresses the insufficient semantic information, high dimensionality, and high sparsity of BoW. We apply the proposed method to the task of text categorization and verify its validity.
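
A minimal sketch of the general recipe the abstract describes, combining TF-IDF-weighted Word2Vec averages with LDA topic proportions, is given below. The toy corpus, library choices (gensim and scikit-learn), and all hyperparameters are assumptions for illustration, not the authors' exact pipeline.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus (an assumption for illustration).
docs = ["the cat sat on the mat",
        "dogs and cats are pets",
        "stock markets fell sharply today",
        "investors sold shares amid market fears"]
tokenized = [d.split() for d in docs]

# 1) Dense word vectors via Word2Vec.
w2v = Word2Vec(tokenized, vector_size=50, min_count=1, workers=1, seed=0)

# 2) TF-IDF weights per (document, word) pair.
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(docs)
vocab = tfidf.get_feature_names_out()

# 3) Document vector = TF-IDF-weighted average of its word vectors.
def doc_vector(row):
    weights = [row[0, j] for j in row.indices if vocab[j] in w2v.wv]
    vecs = [w2v.wv[vocab[j]] for j in row.indices if vocab[j] in w2v.wv]
    return np.average(vecs, axis=0, weights=weights)

dense = np.vstack([doc_vector(X[i]) for i in range(X.shape[0])])

# 4) LDA topic proportions add coarse thematic features.
counts = CountVectorizer().fit_transform(docs)
topics = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

# 5) Final representation: word semantics plus topic mixture, far smaller
# than a vocabulary-sized BoW vector.
doc_repr = np.hstack([dense, topics])
print(doc_repr.shape)  # (4, 52)
```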


2018, Vol 28 (3), pp. 1423-1448
Author(s): Marco Battiston, Stefano Favaro, Daniel M. Roy, Yee Whye Teh

Perception, 10.1068/p7469, 2013, Vol 42 (4), pp. 470-472
Author(s): Jeremy Schwark, Igor Dolgov
