Wisdom of Crowds Effect

Author(s):  
Jack B. Soll ◽  
Albert E. Mannes ◽  
Richard P. Larrick
Author(s):  
Bahador Bahrami

Evidence for and against the idea that “two heads are better than one” is abundant. This chapter considers the contextual conditions and social norms that predict the madness or wisdom of crowds, in order to identify the adaptive value of collective decision-making beyond increased accuracy. Similarity of competence among members of a collective affects collective accuracy, but interacting individuals often seem to operate under the assumption that they are equally competent, even when direct evidence suggests the opposite and dyadic performance suffers. Cross-cultural data from Iran, China, and Denmark support this assumption of similarity (i.e., equality bias) as a sensible heuristic that works most of the time and simplifies social interaction. Crowds often trade off accuracy for other collective benefits, such as diffusion of responsibility and reduction of regret. Consequently, two heads are sometimes better than one, but no one holds the collective accountable, not even for the most disastrous of outcomes.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Benoît de Courson ◽  
Léo Fitouchi ◽  
Jean-Philippe Bouchaud ◽  
Michael Benzaquen

The ability to learn from others (social learning) is often deemed a cause of the success of the human species. But if social learning is indeed more efficient (whether less costly or more accurate) than individual learning, it raises the question of why anyone would engage in individual information seeking, which is a necessary condition for social learning's efficacy. We propose an evolutionary model that resolves this paradox, provided agents (i) aim not only at information quality but also vie for audience and prestige, and (ii) value not only accuracy but also reward originality, which allows them to alleviate herding effects. We find that under some conditions (a large enough success rate of informed agents and an intermediate taste for popularity), both social learning's higher accuracy and the taste for original opinions are evolutionarily stable, within a mutually beneficial, division-of-labour-like equilibrium. When such conditions are not met, the system most often converges towards mutually detrimental equilibria.
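
To make the herding intuition concrete, the following is a deliberately crude toy simulation, not the authors' model: agents repeatedly choose between costly individual information seeking and copying the seekers' majority opinion, with payoffs that mix accuracy, a taste for popularity, and a bonus for holding a correct minority view; strategies then spread by imitation. All parameter names and payoff weights are illustrative assumptions.

```python
import random

def simulate(n_agents=100, n_rounds=300, p_correct=0.7, cost=0.15,
             popularity_taste=0.5, originality_bonus=0.5, seed=0):
    """Toy dynamics: True = seek information individually, False = copy."""
    rng = random.Random(seed)
    seekers = [rng.random() < 0.5 for _ in range(n_agents)]
    for _ in range(n_rounds):
        truth = rng.choice([0, 1])
        # Seekers sample a private, noisy signal; copiers adopt the seekers' majority.
        seeker_ops = [truth if rng.random() < p_correct else 1 - truth
                      for s in seekers if s]
        if seeker_ops:
            majority = 1 if 2 * sum(seeker_ops) >= len(seeker_ops) else 0
        else:
            majority = rng.choice([0, 1])
        it = iter(seeker_ops)
        opinions = [next(it) if s else majority for s in seekers]
        frac_one = sum(opinions) / n_agents
        payoffs = []
        for s, op in zip(seekers, opinions):
            share = frac_one if op == 1 else 1 - frac_one     # popularity of my opinion
            pay = (op == truth)                                # accuracy reward
            pay += popularity_taste * share                    # reward for agreeing with the crowd
            pay += originality_bonus * (op == truth) * (1 - share)  # correct minority views pay extra
            pay -= cost * s                                    # information seeking is costly
            payoffs.append(pay)
        # Strategy revision: imitate a random agent whose payoff was higher this round.
        new = list(seekers)
        for i in range(n_agents):
            j = rng.randrange(n_agents)
            if payoffs[j] > payoffs[i]:
                new[i] = seekers[j]
        seekers = new
    return sum(seekers) / n_agents

print("final share of information seekers:", simulate())
```

Varying `popularity_taste` and `originality_bonus` in this sketch gives a rough feel for when a stable mix of seekers and copiers can persist, which is the kind of equilibrium the abstract describes.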


Synthese ◽  
2021 ◽  
Author(s):  
Justin Sytsma ◽  
Ryan Muldoon ◽  
Shaun Nichols

2019 ◽  
Vol 70 ◽  
pp. 460-478 ◽  
Author(s):  
Jian-Wu Bi ◽  
Yang Liu ◽  
Zhi-Ping Fan ◽  
Jin Zhang

Author(s):  
Jiyi Li ◽  
Yasushi Kawase ◽  
Yukino Baba ◽  
Hisashi Kashima

Quality assurance is one of the most important problems in crowdsourcing and human computation, and it has been studied extensively from various perspectives. Typical approaches include unsupervised methods, such as introducing task redundancy (i.e., asking the same question to multiple workers and aggregating their answers), and supervised methods, such as using worker performance on past tasks or injecting qualification questions into tasks to estimate worker performance. In this paper, we propose to use worker performance as a global constraint for inferring the true answers; existing semi-supervised approaches do not consider such a use of qualification questions. We also propose to use the constraint as a regularizer combined with existing statistical aggregation methods. Experiments with heterogeneous multiple-choice questions demonstrate that the performance constraint not only can estimate the ground truths when used by itself, but also boosts existing aggregation methods when used as a regularizer.
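
As a rough illustration of how a worker-performance signal can steer answer aggregation, here is a minimal sketch, not the paper's algorithm: a weighted majority vote over multiple-choice answers in which each worker's weight is derived from their accuracy on qualification questions and blended with a plain vote through a parameter `reg`. The data layout, the log-odds weighting, and the blending scheme are all illustrative assumptions.

```python
import math
from collections import defaultdict

def aggregate(task_answers, worker_accuracy, n_options, reg=0.5):
    """Weighted majority vote over multiple-choice crowdsourcing answers.

    task_answers:    {task_id: [(worker_id, choice), ...]}
    worker_accuracy: {worker_id: accuracy on qualification questions, in (0, 1)}
    n_options:       number of answer choices per question
    reg:             0.0 = plain majority vote, 1.0 = fully accuracy-weighted vote
    """
    results = {}
    for task_id, votes in task_answers.items():
        scores = defaultdict(float)
        for worker_id, choice in votes:
            acc = worker_accuracy.get(worker_id, 1.0 / n_options)
            acc = min(max(acc, 1e-3), 1.0 - 1e-3)  # keep the log finite
            # Log-odds weight of a worker under a symmetric noise model.
            weight = math.log(acc * (n_options - 1) / (1.0 - acc))
            # Blend an unweighted vote with the performance-based weight.
            scores[choice] += (1.0 - reg) + reg * weight
        results[task_id] = max(scores, key=scores.get)
    return results

# Example: three workers of varying quality answer two 4-option questions.
answers = {
    "q1": [("w1", "A"), ("w2", "A"), ("w3", "B")],
    "q2": [("w1", "C"), ("w2", "D"), ("w3", "D")],
}
qualification_accuracy = {"w1": 0.9, "w2": 0.6, "w3": 0.55}
print(aggregate(answers, qualification_accuracy, n_options=4))
```

With `reg = 0` this reduces to plain redundancy-based voting; increasing `reg` lets the qualification-question signal act as the kind of performance constraint described above.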

