Point-estimating observer models for latent cause detection

2021 ◽  
Vol 17 (10) ◽  
pp. e1009159
Author(s):  
Jennifer Laura Lee ◽  
Wei Ji Ma

The spatial distribution of visual items allows us to infer the presence of latent causes in the world. For instance, a spatial cluster of ants allows us to infer the presence of a common food source. However, in real-world situations, optimal inference requires integrating over a computationally intractable number of world states. For example, optimal inference about whether a common cause exists based on N spatially distributed visual items requires marginalizing over both the location of the latent cause and 2^N possible affiliation patterns (where each item may be affiliated or non-affiliated with the latent cause). How might the brain approximate this inference? We show that subject behaviour deviates qualitatively from Bayes-optimal behaviour, in particular showing an unexpected positive effect of N (the number of visual items) on the false-alarm rate. We propose several “point-estimating” observer models that fit subject behaviour better than the Bayesian model. Each avoids a computationally costly marginalization by “committing” to a point estimate of at least one of the two generative-model variables. These findings suggest that the brain may implement partially committal variants of Bayesian models when detecting latent causes in complex real-world data.
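
To make the combinatorics concrete, here is a toy numerical sketch contrasting the two strategies. The generative details (a Gaussian cluster around the cause, uniform distractors on the unit interval, a grid of candidate cause locations) are illustrative assumptions, not the paper's experimental parameters:

    import itertools
    import numpy as np
    from scipy.stats import norm

    def likelihood(items, center, affiliation, sigma=0.1):
        # Affiliated items are drawn from N(center, sigma); non-affiliated
        # items are uniform on [0, 1] (density 1).
        p = 1.0
        for x, a in zip(items, affiliation):
            p *= norm.pdf(x, center, sigma) if a else 1.0
        return p

    def bayes_marginal(items, centers):
        # Full Bayesian observer: average over a grid of cause locations
        # and all 2^N affiliation patterns (uniform priors on both).
        patterns = itertools.product([0, 1], repeat=len(items))  # 2^N terms
        return np.mean([
            np.mean([likelihood(items, c, a) for c in centers])
            for a in patterns
        ])

    def point_estimate(items, centers, sigma=0.1):
        # Point-estimating observer: for each candidate location, commit to
        # the per-item MAP affiliation (affiliate an item only if the
        # Gaussian density beats the uniform density), then keep the best
        # location. This sidesteps the 2^N enumeration entirely.
        best = 0.0
        for c in centers:
            p = 1.0
            for x in items:
                p *= max(norm.pdf(x, c, sigma), 1.0)
            best = max(best, p)
        return best

    items = [0.48, 0.52, 0.50, 0.91, 0.13]   # a tight cluster plus outliers
    centers = np.linspace(0, 1, 21)
    print(bayes_marginal(items, centers), point_estimate(items, centers))

A detection decision would compare these quantities against the likelihood under a no-cause model (all items uniform); the point at issue in the paper is the cost of the 2^N sum inside the full marginal.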


2020 ◽  
Vol 57 (6) ◽  
pp. 1152-1168
Author(s):  
Francesca Valsesia ◽  
Davide Proserpio ◽  
Joseph C. Nunes

Marketers commonly seed information about products and brands through individuals believed to be influential on social media, which often involves enlisting micro-influencers, users who have accumulated thousands as opposed to millions of followers (i.e., other users who have subscribed to see that individual’s posts). Given an abundance of micro-influencers to choose from, cues that help distinguish more versus less effective influencers on social media are of increasing interest to marketers. The authors identify one such cue: the number of users the prospective influencer is following. Using a combination of real-world data analysis and controlled lab experiments, they show that following fewer others, conditional on having a substantial number of followers, has a positive effect on a social media user’s perceived influence. Further, the authors find that greater perceived influence increases engagement with the shared content: other users exhibit more favorable attitudes toward it (i.e., likes) and a greater propensity to spread it (i.e., retweets). They identify a theoretically important mechanism underlying the effect: following fewer others conveys greater autonomy, a signal of influence in the eyes of others.


2019 ◽  
Vol 113 (5-6) ◽  
pp. 495-513 ◽  
Author(s):  
Thomas Parr ◽  
Karl J. Friston

Active inference is an approach to understanding behaviour that rests upon the idea that the brain uses an internal generative model to predict incoming sensory data. The fit between this model and data may be improved in two ways. The brain could optimise probabilistic beliefs about the variables in the generative model (i.e. perceptual inference). Alternatively, by acting on the world, it could change the sensory data, such that they are more consistent with the model. This implies a common objective function (variational free energy) for action and perception that scores the fit between an internal model and the world. We compare two free energy functionals for active inference in the framework of Markov decision processes. One of these is a functional of beliefs (i.e. probability distributions) about states and policies, but a function of observations, while the second is a functional of beliefs about all three. In the former (expected free energy), prior beliefs about outcomes are not part of the generative model (because they are absorbed into the prior over policies). Conversely, in the second (generalised free energy), priors over outcomes become an explicit component of the generative model. When using the expected free energy, which is blind to future observations, we equip the generative model with a prior over policies that ensures preferred outcomes (i.e. priors over outcomes) are realised. In other words, if we expect to encounter a particular kind of outcome, this lends plausibility to those policies for which this outcome is a consequence. In addition, this formulation ensures that selected policies minimise uncertainty about future outcomes by minimising the free energy expected in the future. When using the generalised free energy, which effectively treats future observations as hidden states, we show that policies are inferred or selected that realise prior preferences by minimising the free energy of future expectations. Interestingly, the form of posterior beliefs about policies (and associated belief updating) turns out to be identical under both formulations, but the quantities used to compute them are not.
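
For orientation, the two functionals can be written schematically in the notation common to this literature (a paraphrase of the standard discrete-time definitions, not a verbatim transcription from the paper):

    % Expected free energy: a functional of beliefs about states and
    % policies, but a function of the as-yet-unobserved outcomes o_tau.
    G(\pi) = \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
             \left[ \ln Q(s_\tau \mid \pi) - \ln P(o_\tau, s_\tau) \right]

    % Generalised free energy: future outcomes are treated as hidden
    % states, so beliefs range over outcomes, states and policies alike,
    % and priors over outcomes enter the generative model P explicitly.
    \mathcal{F}(\pi) = \mathbb{E}_{Q(o_\tau, s_\tau \mid \pi)}
             \left[ \ln Q(o_\tau, s_\tau \mid \pi) - \ln P(o_\tau, s_\tau \mid \pi) \right]

Here Q denotes the (approximate) posterior and P the generative model; the formal difference is exactly the one described above, namely whether beliefs about future outcomes are themselves optimised.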


2019 ◽  
Vol 64 ◽  
pp. 1-20 ◽  
Author(s):  
Alireza Farhadi ◽  
Mohammad Ghodsi ◽  
Mohammad Taghi Hajiaghayi ◽  
Sébastien Lahaie ◽  
David Pennock ◽  
...  

We study fair allocation of indivisible goods to agents with unequal entitlements. Fair allocation has been the subject of many studies in both divisible and indivisible settings. Our emphasis is on the case where the goods are indivisible and agents have unequal entitlements. This problem generalizes the setting of Procaccia and Wang (2014), wherein the agents are assumed to be symmetric with respect to their entitlements. Although Procaccia and Wang show an almost fair (constant approximation) allocation exists in their setting, our main result is in sharp contrast to their observation. We show that, in some cases with n agents, no allocation can guarantee better than a 1/n approximation of a fair allocation when the entitlements are not necessarily equal. Furthermore, we devise a simple algorithm that ensures a 1/n approximation guarantee. Our second result is for a restricted version of the problem where the valuation of every agent for each good is bounded by the total value they wish to receive in a fair allocation. Although this assumption might seem to hold without loss of generality, we show it enables us to find a 1/2-approximate fair allocation via a greedy algorithm. Finally, we run some experiments on real-world data and show that, in practice, a fair allocation is likely to exist. We also support our experiments by showing positive results for two stochastic variants of the problem, namely stochastic agents and stochastic items.
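
The paper's algorithms are not reproduced in this abstract; the sketch below is a hypothetical greedy scheme for the weighted setting (hand each good to the agent currently furthest, in proportion to its entitlement, below its fair share), meant only to convey the flavour of such procedures, not the authors' method or its guarantee. Valuations are assumed additive and entitlement weights positive:

    def greedy_weighted_allocation(valuations, entitlements):
        # valuations[i][g]: agent i's (additive) value for good g.
        # entitlements[i]: agent i's positive entitlement weight (sums to 1).
        n = len(valuations)
        goods = set(range(len(valuations[0])))
        bundles = [[] for _ in range(n)]
        # Agent i's fair share: its entitlement times its total value.
        shares = [entitlements[i] * sum(valuations[i]) for i in range(n)]
        values = [0.0] * n
        while goods:
            # The agent proportionally furthest below its share goes next.
            i = min(range(n), key=lambda a: values[a] / shares[a])
            # It takes its most valuable remaining good.
            g = max(goods, key=lambda g: valuations[i][g])
            bundles[i].append(g)
            values[i] += valuations[i][g]
            goods.remove(g)
        return bundles

    # Three agents with unequal entitlements over five goods.
    vals = [[5, 4, 3, 2, 1], [1, 2, 3, 4, 5], [3, 3, 3, 3, 3]]
    print(greedy_weighted_allocation(vals, [0.5, 0.3, 0.2]))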


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Yange Sun ◽  
Zhihai Wang ◽  
Yang Bai ◽  
Honghua Dai ◽  
Saeid Nahavandi

It is common in real-world data streams that previously seen concepts reappear, a distinctive kind of concept drift known as recurring concepts. Unfortunately, most existing algorithms do not take full account of this case. Motivated by this challenge, a novel paradigm is proposed for capturing and exploiting recurring concepts in data streams. It not only incorporates a distribution-based change detector for handling concept drift but also captures recurring concepts by storing them in a classifier graph. Detecting recurring drifts allows previously learnt models to be reused, enhancing overall learning performance. Extensive experiments on both synthetic and real-world data streams reveal that the approach performs significantly better than state-of-the-art algorithms, especially when concepts reappear.
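
A minimal sketch of the reuse idea, assuming scikit-learn-style classifiers (fit/predict on numpy arrays) and an external drift detector that calls on_drift with a recent window of the stream. The pool lookup below uses a simple accuracy test as a placeholder; the paper's classifier graph and distribution-based detector are not reproduced here:

    class RecurringConceptLearner:
        # Hypothetical pool-based learner: when drift is signalled, reuse a
        # stored model if one already fits the new concept, otherwise train
        # a fresh one.
        def __init__(self, make_model, reuse_threshold=0.8):
            self.make_model = make_model          # factory for a classifier
            self.reuse_threshold = reuse_threshold
            self.pool = []                        # models for past concepts
            self.current = make_model()

        def on_drift(self, recent_X, recent_y):
            self.pool.append(self.current)
            # Score every stored model on the recent window of the stream.
            scored = [((m.predict(recent_X) == recent_y).mean(), m)
                      for m in self.pool]
            best_acc, best = max(scored, key=lambda s: s[0])
            if best_acc >= self.reuse_threshold:
                self.current = best               # recurring concept: reuse
            else:
                self.current = self.make_model()  # novel concept: retrain
                self.current.fit(recent_X, recent_y)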


2021 ◽  
pp. 1-22
Author(s):  
Riichiro Mizoguchi ◽  
Stefano Borgo

YAMATO sharply distinguishes itself from other existing upper ontologies in the following respects. (1) Most importantly, YAMATO is designed with both engineering and philosophical minds. (2) YAMATO is based on a sophisticated theory of roles, given that the world is full of roles. (3) YAMATO has a tenable theory of functions, which helps to deal with artifacts effectively. (4) Information is a ‘content-bearing’ entity that differs significantly from the entities philosophers have traditionally discussed; reflecting a modern society awash in information, YAMATO has a sophisticated theory of informational objects (representations). (5) Quality and quantity are carefully organized for the sake of greater interoperability of real-world data. (6) The philosophical contribution of YAMATO includes a theory of objects, processes, and events. These features are illustrated with several case studies, and they have led to intensive application of YAMATO in domains such as biomedicine and learning engineering.


Author(s):  
Chao Qian ◽  
Guiying Li ◽  
Chao Feng ◽  
Ke Tang

The subset selection problem, which asks for a few items to be selected from a ground set, arises in many applications such as maximum coverage, influence maximization, and sparse regression. The recently proposed POSS algorithm is a powerful approximation solver for this problem. However, POSS requires centralized access to the full ground set, and thus is impractical for large-scale real-world applications, where the ground set is too large to be stored on a single machine. In this paper, we propose a distributed version of POSS (DPOSS) with a bounded approximation guarantee. DPOSS can be easily implemented in the MapReduce framework. Our extensive experiments using Spark, on various real-world data sets with sizes ranging from thousands to millions of items, show that DPOSS achieves competitive performance compared with the centralized POSS, and is almost always better than the state-of-the-art distributed greedy algorithm RandGreeDi.
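
DPOSS itself is not spelled out in the abstract; the sketch below shows the generic two-round partition-and-merge pattern that distributed subset selectors of this kind follow (it mirrors the structure of RandGreeDi, with a plain greedy stand-in where DPOSS would run POSS as the local solver):

    import random

    def greedy_subset(items, utility, k):
        # Stand-in local solver: greedy maximization of a set function.
        chosen = []
        for _ in range(k):
            rest = [x for x in items if x not in chosen]
            if not rest:
                break
            chosen.append(max(rest, key=lambda x: utility(chosen + [x])))
        return chosen

    def two_round_distributed(items, utility, k, n_machines=4, seed=0):
        # Round 1 (map): partition the ground set at random and solve each
        # part locally. Round 2 (reduce): pool the local solutions and
        # solve once more over the union; return the best candidate found.
        shuffled = items[:]
        random.Random(seed).shuffle(shuffled)
        parts = [shuffled[i::n_machines] for i in range(n_machines)]
        local = [greedy_subset(p, utility, k) for p in parts]
        merged = [x for sol in local for x in sol]
        candidates = local + [greedy_subset(merged, utility, k)]
        return max(candidates, key=utility)

    # Toy coverage instance over a universe of 30 elements.
    sets = {i: set(random.Random(i).sample(range(30), 5)) for i in range(40)}
    cover = lambda S: len(set().union(*[sets[i] for i in S])) if S else 0
    print(two_round_distributed(list(sets), cover, k=5))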


2018 ◽  
Vol 23 (4) ◽  
pp. 422-432 ◽  
Author(s):  
Barbara Nuñez‐Valdovinos ◽  
Alberto Carmona‐Bayonas ◽  
Paula Jimenez‐Fonseca ◽  
Jaume Capdevila ◽  
Ángel Castaño‐Pascual ◽  
...  

Psihologija ◽  
2013 ◽  
Vol 46 (4) ◽  
pp. 377-396 ◽  
Author(s):  
Michael Ramscar

The world's languages tend to exhibit a suffixing preference, adding inflections to the ends of words rather than to their beginnings. Previous work has suggested that this apparently universal preference arises out of the constraints imposed by general-purpose learning mechanisms in the brain, and specifically the kinds of information structures that facilitate discrimination learning (St Clair, Monaghan, & Ramscar, 2009). Here I show that learning theory predicts that prefixes and suffixes will tend to promote different kinds of learning: prefixes will facilitate the learning of the probabilities that any following elements in a sequence will follow a label, whereas suffixing will promote the abstraction of common dimensions from a set of preceding elements. The results of an artificial language learning experiment support this analysis: when words were learned with consistent prefixes, participants learned the relationship between the prefixes and the noun labels, and the relationship between the noun labels and the objects associated with them, better than when words were learned with consistent suffixes. When words were learned with consistent suffixes, participants treated similarly suffixed nouns as more similar than nouns learned with consistent prefixes. It appears that while prefixes tend to make items more predictable and veridical discriminations easier, suffixes tend to make items cohere more, increasing the similarities between them.
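
The predicted asymmetry can be illustrated with a toy error-driven simulation in the Rescorla-Wagner style (my illustration, not the paper's materials): two nouns share one feature and each carries a unique feature, and both take the same affix. In the suffixing order the features are cues competing to predict the affix, so the shared dimension wins out; in the prefixing order the affix is the cue and each feature is an outcome, so all features come to be predicted roughly alike:

    import numpy as np

    def train(trials, n_units, lr=0.2, epochs=100):
        # w[cue, outcome]: earlier elements (cues) learn to predict later
        # elements (outcomes); cues presented together share, and so
        # compete for, the prediction error.
        w = np.zeros((n_units, n_units))
        for _ in range(epochs):
            for cues, outcomes in trials:
                for o in outcomes:
                    w[cues, o] += lr * (1.0 - w[cues, o].sum())
        return w

    # Units: 0 = shared feature, 1 and 2 = unique features, 3 = affix.
    suffix = [([0, 1], [3]), ([0, 2], [3])]  # features precede the affix
    prefix = [([3], [0, 1]), ([3], [0, 2])]  # affix precedes the features

    w_suf = train(suffix, 4)
    w_pre = train(prefix, 4)
    print("feature->affix:", w_suf[[0, 1, 2], 3])  # shared feature dominates
    print("affix->feature:", w_pre[3, [0, 1, 2]])  # near-uniform prediction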

