Crowd control: Reducing individual estimation bias by sharing biased social information

2021 ◽  
Vol 17 (11) ◽  
pp. e1009590
Author(s):  
Bertrand Jayles ◽  
Clément Sire ◽  
Ralf H. J. M. Kurvers

Cognitive biases are widespread in humans and animals alike, and can sometimes be reinforced by social interactions. One prime bias in judgment and decision-making is the human tendency to underestimate large quantities. Previous research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective accuracy, showing that randomly sharing estimates does not reduce the underestimation bias. Here, we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study if it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects the sensitivity to social influence and estimation accuracy, and (iii) the mechanisms underlying the integration of multiple estimates. Our restructuring of social interactions successfully countered the underestimation bias. Moreover, we find that sharing more than one estimate also reduces the underestimation bias. Underlying our results are a human tendency to herd, to trust larger estimates than one’s own more than smaller estimates, and to follow disparate social information less. Using a computational modeling approach, we demonstrate that these effects are indeed key to explain the experimental results. Overall, our results show that existing knowledge on biases can be used to dampen their negative effects and boost judgment accuracy, paving the way for combating other cognitive biases threatening collective systems.
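The asymmetric weighting the abstract reports (trusting larger estimates more than smaller ones) can be illustrated with a minimal sketch. The weights `w_up` and `w_down` are hypothetical values for illustration, not the paper's fitted parameters; estimates are combined on a log scale, as is standard for quantity estimation:

```python
import math

def update_estimate(personal, social, w_up=0.6, w_down=0.4):
    """Combine a personal estimate with social information on a log scale.

    w_up / w_down are hypothetical weights illustrating the reported
    asymmetry: social information above one's own estimate is weighted
    more heavily (w_up) than social information below it (w_down).
    """
    lp, ls = math.log(personal), math.log(social)
    w = w_up if ls > lp else w_down
    return math.exp((1 - w) * lp + w * ls)

# A social estimate above one's own pulls the second estimate up more
# strongly than an equally distant (in log space) estimate below
# pulls it down, nudging the group away from underestimation.
higher = update_estimate(100, 200)  # social info larger than own
lower = update_estimate(100, 50)    # social info smaller than own
```

With these illustrative weights, the upward pull exceeds the downward pull, which is the mechanism the abstract describes for countering the underestimation bias.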

2019 ◽  
Author(s):  
Bertrand Jayles ◽  
Ralf Kurvers

Cognitive biases are widespread in humans and animals alike, and can impair the quality of collective judgments and decisions. One such prime bias in judgment is the human tendency to underestimate large quantities. Former research on social influence in estimation tasks has generally focused on the impact of single estimates on individual and collective judgments, showing that randomly sharing estimates does not reduce the underestimation bias. Here we test a method of social information sharing that exploits the known relationship between the true value and the level of underestimation, and study if it can counteract the underestimation bias. We performed estimation experiments in which participants had to estimate a series of quantities twice, before and after receiving estimates from one or several group members. Our purpose was threefold: to study (i) whether restructuring the sharing of social information can reduce the underestimation bias, (ii) how the number of estimates received affects improvement in accuracy, and (iii) the mechanisms underlying the integration of multiple estimates. Our restructuring of social interactions was successful and substantially boosted collective accuracy, countering the underestimation bias. Moreover, we find that sharing more than one estimate also reduces the underestimation bias. Underlying our results are a human tendency to herd, to trust larger estimates than one's own more than smaller estimates, and to follow disparate social information less. Using a computational modeling approach, we demonstrate that these effects are indeed key to explain the experimental results. We use the model to explore the conditions under which estimation accuracy can be improved further. Overall, our results show that existing knowledge on biases can be used to dampen their negative effects and boost judgment accuracy, paving the way for combating other cognitive biases threatening collective systems.


2020 ◽  
Author(s):  
Bertrand Jayles ◽  
Clément Sire ◽  
Ralf Kurvers

Information technology has changed our relation to information and to others. In particular, we are increasingly confronted with the opinions and beliefs of peers via the ever-expanding use of social networks and recommender systems. Such large amounts of information challenge people's ability to process them, making aggregated forms of social information increasingly popular. However, it is unclear whether people's judgments and decisions are similar, better, or worse when sharing aggregates versus sharing all the available information. To study this, we performed estimation experiments in which participants estimated various quantities, before and after receiving estimates from other group members. We varied the number of estimates shared, and subjects received either all the shared estimates or their geometric mean. In the latter case, subjects were informed about the number of estimates underlying the mean. Our results show that estimation accuracy improves similarly when sharing all estimates or averages. However, people use social information differently across treatments. First, subjects weight social information more when receiving averages than when receiving all estimates. Moreover, this weight increases as more estimates underlie the average. Second, subjects weight social information more when it is higher than their initial estimate than when it is lower. This effect drives second estimates toward higher values, thereby partly counteracting the well-known human tendency to underestimate large quantities. This effect is stronger when receiving all available estimates compared to receiving their average. We introduce a model which reproduces our experimental results well. The model predicts that at larger group sizes, accuracy improves significantly more when sharing averages than when sharing all estimates.
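The averaged treatment shares the geometric mean of the group's estimates, i.e. the exponential of the mean log-estimate. A minimal sketch (the example values are invented):

```python
import math

def geometric_mean(estimates):
    """The aggregate shared in the averaged treatment: the exponential
    of the arithmetic mean of the log-estimates."""
    logs = [math.log(e) for e in estimates]
    return math.exp(sum(logs) / len(logs))

# The geometric mean damps the influence of a single very large
# estimate far more than the arithmetic mean does, which matters
# when estimates span orders of magnitude.
estimates = [80, 100, 125, 1000]
gm = geometric_mean(estimates)
am = sum(estimates) / len(estimates)
```

For heavy-tailed estimate distributions, `gm` sits well below `am`, which is one reason log-scale aggregation is the natural choice in this paradigm.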


2020 ◽  
Vol 17 (170) ◽  
pp. 20200496
Author(s):  
Bertrand Jayles ◽  
Ramón Escobedo ◽  
Stéphane Cezera ◽  
Adrien Blanchet ◽  
Tatsuya Kameda ◽  
...  

A major problem resulting from the massive use of social media is the potential spread of incorrect information. Yet, very few studies have investigated the impact of incorrect information on individual and collective decisions. We performed experiments in which participants had to estimate a series of quantities, before and after receiving social information. Unbeknownst to them, we controlled the degree of inaccuracy of the social information through ‘virtual influencers’, who provided some incorrect information. We find that a large proportion of individuals only partially follow the social information, thus resisting incorrect information. Moreover, incorrect information can help improve group performance more than correct information, when going against a human underestimation bias. We then design a computational model whose predictions are in good agreement with the empirical data and which sheds light on the mechanisms underlying our results. Besides these main findings, we demonstrate that the dispersion of estimates varies substantially between quantities, and must thus be considered when normalizing and aggregating estimates of quantities that are very different in nature. Overall, our results suggest that incorrect information does not necessarily impair the collective wisdom of groups, and can even be used to dampen the negative effects of known cognitive biases.


2017 ◽  
Vol 114 (47) ◽  
pp. 12620-12625 ◽  
Author(s):  
Bertrand Jayles ◽  
Hye-rin Kim ◽  
Ramón Escobedo ◽  
Stéphane Cezera ◽  
Adrien Blanchet ◽  
...  

In our digital and connected societies, the development of social networks, online shopping, and reputation systems raises the questions of how individuals use social information and how it affects their decisions. We report experiments performed in France and Japan, in which subjects could update their estimates after having received information from other subjects. We measure and model the impact of this social information at individual and collective scales. We observe and justify that, when individuals have little prior knowledge about a quantity, the distribution of the logarithm of their estimates is close to a Cauchy distribution. We find that social influence helps the group improve its properly defined collective accuracy. We quantify the improvement of the group estimation when additional controlled and reliable information is provided, unbeknownst to the subjects. We show that subjects’ sensitivity to social influence permits us to define five robust behavioral traits and increases with the difference between personal and group estimates. We then use our data to build and calibrate a model of collective estimation to analyze the impact on the group performance of the quantity and quality of information received by individuals. The model quantitatively reproduces the distributions of estimates and the improvement of collective performance and accuracy observed in our experiments. Finally, our model predicts that providing a moderate amount of incorrect information to individuals can counterbalance the human cognitive bias to systematically underestimate quantities and thereby improve collective performance.
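The observation that the logarithm of estimates is close to Cauchy distributed has a practical consequence: a Cauchy distribution has no defined mean, so the median of the log-estimates is the natural robust centre for a collective estimate. A small illustrative simulation (the location, scale, and sample size here are arbitrary choices, not the paper's fitted values):

```python
import math, random

random.seed(1)

def sample_log_estimates(n, loc, scale):
    """Draw n log-estimates from a Cauchy distribution via the
    inverse-CDF method, as an illustrative stand-in for the empirical
    finding that log-estimates are close to Cauchy distributed."""
    return [loc + scale * math.tan(math.pi * (random.random() - 0.5))
            for _ in range(n)]

# The sample mean of Cauchy draws does not converge, but the sample
# median does, so the collective estimate is defined through the
# median of the log-estimates.
logs = sorted(sample_log_estimates(1001, loc=math.log(500), scale=0.4))
collective = math.exp(logs[len(logs) // 2])
```

Despite individual log-estimates occasionally landing many scale units from the centre, the median-based collective estimate stays close to the distribution's location.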


2020 ◽  
Author(s):  
Guy Theraulaz ◽  
Bertrand Jayles ◽  
Ramon Escobedo ◽  
Stéphane Cezera ◽  
Adrien Blanchet ◽  
...  

A major problem that results from the massive use of social media networks is the possible spread of incorrect information. However, very few studies have investigated the impact of incorrect information on individual and collective decisions. We performed experiments in which participants had to estimate a series of quantities before and after receiving social information. Unbeknownst to them, we controlled the degree of inaccuracy of the social information through “virtual influencers”, who provided some incorrect information. We find that a large proportion of individuals only partially follow the social information, thus resisting incorrect information. Moreover, we find that incorrect social information can help a group perform better when it overestimates the true value, by partly compensating a human underestimation bias. Overall, our results suggest that incorrect information does not necessarily impair the collective wisdom of groups, and can even be used to dampen the negative effects of known cognitive biases.


2021 ◽  
Vol 12 ◽  
Author(s):  
Mathias Jesse ◽  
Dietmar Jannach ◽  
Bartosz Gula

When people search for what to cook for the day, they increasingly use online recipe sites to find inspiration. Such recipe sites often show popular recipes to make it easier to find a suitable choice. However, these popular recipes are not always the healthiest options and can promote an unhealthy lifestyle. Our goal is to understand to what extent it is possible to steer the food selection of people through digital nudging. While nudges have been shown to affect humans' behavior regarding food choices in the physical world, there is little research on the impact of nudges on online food choices. Specifically, it is unclear how different nudges impact (i) the behavior of people, (ii) the time they need to make a decision, and (iii) their satisfaction and confidence with their selection. We investigate the effects of highlighting, defaults, social information, and warnings on the decision-making of online users through two consecutive user studies. Our results show that a hybrid nudge, which both sets a default and adds social information, significantly increases the likelihood that a nudged item is selected. Moreover, it may help decrease the required decision time for participants while having no negative effects on the participants' satisfaction and confidence. Overall, our work provides evidence that nudges can be effective in this domain, but also that the type of digital nudge matters. Therefore, different nudges should be evaluated in practical applications.


2018 ◽  
Author(s):  
Albert B. Kao ◽  
Andrew M. Berdahl ◽  
Andrew T. Hartnett ◽  
Matthew J. Lutz ◽  
Joseph B. Bak-Coleman ◽  
...  

Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities, and across different methods for averaging social information. Utilizing knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds.


2021 ◽  
Vol 18 (180) ◽  
pp. 20210231
Author(s):  
Bertrand Jayles ◽  
Clément Sire ◽  
Ralf H. J. M. Kurvers

The recent developments of social networks and recommender systems have dramatically increased the amount of social information shared in human communities, challenging the human ability to process it. As a result, sharing aggregated forms of social information is becoming increasingly popular. However, it is unknown whether sharing aggregated information improves people’s judgments more than sharing the full available information. Here, we compare the performance of groups in estimation tasks when social information is fully shared versus when it is first averaged and then shared. We find that improvements in estimation accuracy are comparable in both cases. However, our results reveal important differences in subjects’ behaviour: (i) subjects follow the social information more when receiving an average than when receiving all estimates, and this effect increases with the number of estimates underlying the average; (ii) subjects follow the social information more when it is higher than their personal estimate than when it is lower. This effect is stronger when receiving all estimates than when receiving an average. We introduce a model that sheds light on these effects, and confirms their importance for explaining improvements in estimation accuracy in all treatments.


2021 ◽  
Author(s):  
Evan Oughton

Sharing knowledge enables employees, companies, and the broader industry to reflect on lessons learned from previous projects and increases confidence in predicting outcomes of future projects. When this process works well, it reveals how to become more efficient and how to measure those efficiency gains. The author explores the concept of continuous improvement, by presenting newly developed processes and evaluating subsea intervention case histories in the Gulf of Mexico (GOM). Knowledge sharing exercises can often be overlooked by companies urgently progressing through multiple projects. In the best case, lessons-learned activities are completed in after-action reviews, yielding cumbersome spreadsheets that can be easily misplaced or forgotten over time. Hess has developed an application for lessons learned, along with a structured process to maintain data quality. Recent interventions have both contributed to capturing new learnings and implementing those already identified into the planning phase of upcoming operations. Time and cost estimation accuracy and operational efficiency initiatives were then evaluated and compared to identify the true value of an effective lessons-learned system. Two coiled tubing (CT) interventions trialed Hess's new application and associated processes. The technical challenges of these projects were evaluated and compared to determine if effectively applying lessons learned could lead to continuous improvement. Observations demonstrated significant improvements to the accuracy of time and cost estimates along with enhanced operational performance, leading to time and cost savings. This practice has helped Hess to reduce the overall uncertainty typically associated with subsea well interventions and allowed for continuous improvement of well intervention performance.
This paper explores the concept and implementation of an effective system to manage lessons learned to achieve improved operational performance and efficiency. Implementing lessons learned and comparing similar projects allows an engineer to measure the improvement in efficiency precisely. The paper concludes with an evaluation of the impact an industry-wide knowledge sharing database could have, and the potential value it could provide to operators in the GOM region.


2018 ◽  
Vol 15 (141) ◽  
pp. 20180130 ◽  
Author(s):  
Albert B. Kao ◽  
Andrew M. Berdahl ◽  
Andrew T. Hartnett ◽  
Matthew J. Lutz ◽  
Joseph B. Bak-Coleman ◽  
...  

Aggregating multiple non-expert opinions into a collective estimate can improve accuracy across many contexts. However, two sources of error can diminish collective wisdom: individual estimation biases and information sharing between individuals. Here, we measure individual biases and social influence rules in multiple experiments involving hundreds of individuals performing a classic numerosity estimation task. We first investigate how existing aggregation methods, such as calculating the arithmetic mean or the median, are influenced by these sources of error. We show that the mean tends to overestimate, and the median underestimate, the true value for a wide range of numerosities. Quantifying estimation bias, and mapping individual bias to collective bias, allows us to develop and validate three new aggregation measures that effectively counter sources of collective estimation error. In addition, we present results from a further experiment that quantifies the social influence rules that individuals employ when incorporating personal estimates with social information. We show that the corrected mean is remarkably robust to social influence, retaining high accuracy in the presence or absence of social influence, across numerosities and across different methods for averaging social information. Using knowledge of estimation biases and social influence rules may therefore be an inexpensive and general strategy to improve the wisdom of crowds.
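One way to see how such corrected aggregation measures can work (this is a simplified stand-in for illustration, not the paper's exact estimators): calibrate a multiplicative correction to the median on stimuli whose true value is known, then apply it to new estimates:

```python
import math

def corrected_median(estimates, calibration):
    """Correct the median of a set of estimates with a multiplicative
    factor learned from stimuli whose true value is known.

    `calibration` is a list of (median_estimate, true_value) pairs from
    previous trials; the correction factor is the geometric mean of the
    true/median ratios. A hypothetical simplification of the corrected
    measures developed in the paper.
    """
    log_ratios = [math.log(t / m) for m, t in calibration]
    factor = math.exp(sum(log_ratios) / len(log_ratios))
    ests = sorted(estimates)
    n = len(ests)
    # Geometric interpolation for even n, plain middle value for odd n.
    median = ests[n // 2] if n % 2 else math.sqrt(ests[n // 2 - 1] * ests[n // 2])
    return factor * median

# Past trials show the raw median under-shooting the truth by ~20%,
# so the calibrated factor scales new medians up accordingly.
calibration = [(80, 100), (160, 200), (400, 500)]
estimate = corrected_median([90, 120, 160, 200, 260], calibration)
```

Because the correction is learned once from known numerosities, it can be applied cheaply to any new crowd, which is the spirit of the "inexpensive and general strategy" the abstract describes.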

