Naïve Learning Through Probability Overmatching
Naïve Learning in a Binary Action, Social Network Environment

In "Naïve Learning Through Probability Overmatching," I. Arieli, Y. Babichenko, and M. Mueller-Frank consider an environment in which privately informed agents repeatedly select a binary action after observing the past actions of their neighbors in a social network. Rational inference has been shown to be exceedingly complex in this environment. This paper instead focuses on boundedly rational agents who form beliefs according to discretized DeGroot updating and apply a decision rule that assigns a (mixed) action to each belief. The authors show that naïve learning, in which the long-run actions of all agents are optimal given their pooled private information, can be achieved in any strongly connected network, provided beliefs exhibit a high level of inertia and the decision rule coincides with probability overmatching. The main difference from existing naïve learning results is that here naïve learning is shown to hold (1) for binary rather than uncountable action spaces and (2) even for network and information structures in which Bayesian agents fail to learn.
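The two ingredients highlighted in the abstract, a discretized DeGroot belief update with high inertia and a probability-overmatching decision rule, can be illustrated with a toy sketch. This is not the authors' exact model: the belief grid, the inertia weight, the use of the neighbors' empirical action frequency as the DeGroot input, and the power-form overmatching rule are all illustrative assumptions.

```python
import random

GRID = [i / 10 for i in range(11)]  # assumed discretized belief grid
INERTIA = 0.9                       # high inertia: most weight on own belief

def snap(x):
    """Round a belief to the nearest grid point (discretization)."""
    return min(GRID, key=lambda g: abs(g - x))

def degroot_step(own_belief, neighbor_actions):
    """Discretized DeGroot update with high inertia: mix the old belief
    with the empirical frequency of neighbors choosing action 1."""
    freq = sum(neighbor_actions) / len(neighbor_actions)
    return snap(INERTIA * own_belief + (1 - INERTIA) * freq)

def overmatch(belief, gamma=2.0):
    """Illustrative probability-overmatching rule: the favored action is
    chosen with probability exceeding the belief itself (but below 1)."""
    return belief ** gamma / (belief ** gamma + (1 - belief) ** gamma)

def act(belief, rng=random):
    """Draw a binary action from the (mixed) overmatching rule."""
    return 1 if rng.random() < overmatch(belief) else 0
```

With `gamma > 1` the rule overmatches: a belief of 0.7 in action 1 produces an action-1 probability above 0.7, while exact probability matching would use the belief itself.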