Counteracting estimation bias and social influence to improve the wisdom of crowds

2018 ◽  
Author(s):  
Albert B. Kao ◽  
Andrew M. Berdahl ◽  
Andrew T. Hartnett ◽  
Matthew J. Lutz ◽  
Joseph B. Bak-Coleman ◽  
...  

Vol 15 (141) ◽  
pp. 20180130 ◽  

Abstract
Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median to underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities, and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds.
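The contrast the abstract draws between the mean and the median, and the idea of a bias-corrected aggregate, can be illustrated with a minimal simulation. This is not the authors' fitted correction: the lognormal error model, its parameters, and the calibration step are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 500.0  # true numerosity (illustrative)

# Assume estimates are lognormally distributed around the truth, with the
# median below it: this skewed shape makes the mean overshoot and the
# median undershoot, as reported in the paper.
estimates = true_value * rng.lognormal(mean=-0.2, sigma=0.8, size=5000)

arithmetic_mean = estimates.mean()       # inflated by the right skew
median_estimate = np.median(estimates)   # pulled below the truth

# A "corrected mean" in the spirit of the paper's measures: estimate the
# multiplicative bias from independent calibration data and divide it out.
calibration = true_value * rng.lognormal(mean=-0.2, sigma=0.8, size=5000)
bias_factor = calibration.mean() / true_value
corrected_mean = arithmetic_mean / bias_factor
```

With these parameters the raw mean sits above the truth and the median below it, while the corrected mean lands much closer than either, which is the qualitative pattern the paper exploits.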


2010 ◽  
Vol 2 (1) ◽  
pp. 112-149 ◽  
Author(s):  
Benjamin Golub ◽  
Matthew O Jackson

We study learning in a setting where agents receive independent noisy signals about the true value of a variable and then communicate in a network. They naïvely update beliefs by repeatedly taking weighted averages of neighbors' opinions. We show that all opinions in a large society converge to the truth if and only if the influence of the most influential agent vanishes as the society grows. We also identify obstructions to this, including prominent groups, and provide structural conditions on the network ensuring efficient learning. Whether agents converge to the truth is unrelated to how quickly consensus is approached. (JEL D83, D85, Z13)
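The naive updating rule described here (repeatedly taking weighted averages of neighbors' opinions) is the DeGroot model. A minimal sketch follows; the three-agent trust matrix and noise parameters are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

# Row-stochastic trust matrix: W[i, j] is the weight agent i places on
# agent j's opinion (an arbitrary illustrative network).
W = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

truth = 10.0
rng = np.random.default_rng(1)
initial = truth + rng.normal(0.0, 1.0, size=3)  # independent noisy signals

# Naive (DeGroot) updating: repeatedly average neighbors' opinions.
beliefs = initial.copy()
for _ in range(100):
    beliefs = W @ beliefs

# Beliefs converge to a consensus: a weighted average of the initial
# signals, where the weights are each agent's influence (the left
# eigenvector of W). Vanishing maximal influence as the society grows is
# exactly the paper's condition for convergence to the truth.
```

Running the loop drives all three beliefs to a common value lying inside the range of the initial signals; whether that consensus is near the truth depends on how concentrated the influence weights are.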


2020 ◽  
pp. 009365022091503
Author(s):  
Bei Yan ◽  
Lian Jian ◽  
Ruqin Ren ◽  
Janet Fulk ◽  
Emily Sidnam-Mauch ◽  
...  

Research on the wisdom of crowds (WOC) identifies two paradoxical effects of communication. The social influence effect hampers the WOC, whereas the collective learning effect improves crowd wisdom. Yet it remains unclear under what conditions such communication impedes or enhances collective wisdom. The current study examined two features characterizing communication in online communities, communication network centralization and shared task experience, and their effect on the WOC. Both these features can serve as indicators of the likelihood that underlying communication may facilitate either social influence or collective learning. With an 8-year longitudinal behavioral-trace data set of 269,871 participants and 1,971 crowds, we showed that communication network centralization negatively affected the WOC. By contrast, shared task experience positively predicted the WOC. Shared task experience also moderated the effect of communication network centralization such that centralized communication networks became more beneficial for crowd performance as shared task experience increased.


2020 ◽  
Author(s):  
Abdullah Almaatouq ◽  
M. Amin Rahimian ◽  
Abdulla Alhajri

Whether, and under what conditions, groups exhibit "crowd wisdom" has been a major focus of research across the social and computational sciences. Much of this work has focused on the role of social influence in promoting the wisdom of the crowd versus leading the crowd astray, resulting in conflicting conclusions about how the social network structure determines the impact of social influence. Here, we demonstrate that it is not enough to consider the network structure in isolation. Using theoretical analysis, numerical simulation, and reanalysis of four experimental datasets (totaling 4,002 human subjects), we find that the wisdom of crowds critically depends on the interaction between (i) the centralization of the social influence network and (ii) the distribution of the initial, individual estimates, i.e., the estimation context. Specifically, we propose a feature of the estimation context that measures the suitability of the crowd to benefit from influence centralization, and we show its significant predictive power empirically. By adopting a framework that integrates both the structure of social influence and the estimation context, we bring previously conflicting results under one theoretical framework and clarify the effects of social influence on the wisdom of crowds.


2021 ◽  
Vol 17 (11) ◽  
pp. e1009590
Author(s):  
Bertrand Jayles ◽  
Clément Sire ◽  
Ralf H. J. M. Kurvers

Cognitive biases are widespread in humans and animals alike, and can sometimes be reinforced by social interactions. One prime bias in judgment and decision-making is the human tendency to underestimate large quantities. Previous research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective accuracy, showing that randomly sharing estimates does not reduce the underestimation bias. Here, we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study whether it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects sensitivity to social influence and estimation accuracy, and (iii) the mechanisms underlying the integration of multiple estimates. Our restructuring of social interactions successfully countered the underestimation bias. Moreover, we find that sharing more than one estimate also reduces the underestimation bias. Underlying our results are human tendencies to herd, to trust estimates larger than one’s own more than smaller ones, and to discount highly disparate social information. Using a computational modeling approach, we demonstrate that these effects are indeed key to explaining the experimental results. Overall, our results show that existing knowledge of biases can be used to dampen their negative effects and boost judgment accuracy, paving the way for combating other cognitive biases threatening collective systems.
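The general idea of exploiting a known bias model to restructure what is shared can be sketched as follows. This is not the authors' experimental procedure: the power-law underestimation model, its exponent, and the use of a geometric mean are illustrative assumptions only.

```python
import numpy as np

gamma = 0.9  # assumed bias exponent: estimates of T behave like T ** gamma

def debias(estimate):
    # Invert the assumed power-law bias before an estimate is shared.
    return estimate ** (1.0 / gamma)

rng = np.random.default_rng(3)
true_value = 2000.0

# Raw estimates: centered (in log space) on the biased value T ** gamma,
# so large quantities are systematically underestimated.
raw = (true_value ** gamma) * rng.lognormal(0.0, 0.3, size=500)

shared_raw = np.exp(np.log(raw).mean())                # geometric mean: biased low
shared_corrected = np.exp(np.log(debias(raw)).mean())  # corrected before sharing
```

Under this assumed model, sharing raw estimates propagates the underestimation, while inverting the known bias before sharing recovers a value close to the truth, which is the spirit of the restructured information exchange described above.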


Author(s):  
Pavlin Mavrodiev ◽  
Frank Schweitzer

Abstract
We propose an agent-based model of collective opinion formation to study the wisdom of crowds under social influence. The opinion of an agent is a continuous positive value, denoting its subjective answer to a factual question. The wisdom of crowds holds when the average of all opinions is close to the truth, i.e., the correct answer. But if agents have the chance to adjust their opinion in response to the opinions of others, this effect can be destroyed. Our model investigates this scenario by evaluating two competing effects: (1) agents tend to keep their own opinion (individual conviction); (2) they tend to adjust their opinion if they have information about the opinions of others (social influence). For the latter, two different regimes (full information vs. aggregated information) are compared. Our simulations show that social influence only rarely enhances the wisdom of crowds. Most often, we find that agents converge to a collective opinion that is even farther from the true answer. Therefore, under social influence the wisdom of crowds can be systematically wrong.
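A minimal sketch of the aggregated-information regime described here: each round, every agent mixes its own opinion with the current group mean. The linear update rule, its parameters, and the lognormal initial bias are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)
truth = 100.0
n_agents = 200
alpha = 0.3  # strength of social influence (0 = pure individual conviction)

# Initial opinions: noisy and collectively biased low (illustrative values).
opinions = truth * rng.lognormal(mean=-0.2, sigma=0.5, size=n_agents)
initial_crowd_estimate = opinions.mean()

# Aggregated-information regime: each round, every agent keeps weight
# (1 - alpha) on its own opinion and puts weight alpha on the group mean.
for _ in range(50):
    opinions = (1 - alpha) * opinions + alpha * opinions.mean()

final_crowd_estimate = opinions.mean()
# The crowd mean is unchanged under this symmetric update, but opinion
# diversity collapses: the group converges confidently on an estimate that
# retains the initial collective bias.
```

In this symmetric sketch the consensus inherits the initial bias without drifting; in the paper's model, heterogeneous conviction and influence can additionally drag the consensus even farther from the truth.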


2019 ◽  
Vol 116 (22) ◽  
pp. 10717-10722 ◽  
Author(s):  
Joshua Becker ◽  
Ethan Porter ◽  
Damon Centola

Theories in favor of deliberative democracy are based on the premise that social information processing can improve group beliefs. While research on the “wisdom of crowds” has found that information exchange can increase belief accuracy on noncontroversial factual matters, theories of political polarization imply that groups will become more extreme—and less accurate—when beliefs are motivated by partisan political bias. A primary concern is that partisan biases are associated not only with more extreme beliefs, but also with a diminished response to social information. While bipartisan networks containing both Democrats and Republicans are expected to promote accurate belief formation, politically homogeneous networks are expected to amplify partisan bias and reduce belief accuracy. To test whether the wisdom of crowds is robust to partisan bias, we conducted two web-based experiments in which individuals answered factual questions known to elicit partisan bias before and after observing the estimates of peers in a politically homogeneous social network. In contrast to polarization theories, we found that social information exchange in homogeneous networks not only increased accuracy but also reduced polarization. Our results help generalize collective intelligence research to political domains.

