Do low-confidence individuals decrease group judgments’ accuracy? Investigations in terms of the wisdom of crowds framework

2021 ◽  
Author(s):  
Masaru Shirasuna ◽  
Hidehito Honda

Abstract In group judgments on a binary choice task, the judgments of individuals with low confidence (i.e., those who feel their judgment may not be correct) may be regarded as unreliable. Previous studies have shown that aggregating individuals’ diverse judgments can lead to high accuracy in group judgments, a phenomenon known as the wisdom of crowds. Therefore, if low-confidence individuals make diverse judgments across individuals and the mean accuracy of their judgments is above chance level (.50), they will not necessarily decrease the accuracy of group judgments. To investigate this issue, the present study conducted behavioral experiments using binary choice inferential tasks and computer simulations of group judgments, manipulating group size and individuals’ confidence levels. Results revealed that (I) judgment patterns were highly similar between individuals regardless of their confidence levels; (II) as group size increased, the low-confidence group could make judgments as accurate as the high-confidence group; and (III) even when low-confidence individuals were present in a group, they generally did not inhibit group judgment accuracy. The results suggest the usefulness of low-confidence individuals’ judgments in a group and provide practical implications for real-world group judgments.
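The aggregation mechanism the abstract relies on can be illustrated with a small Monte Carlo sketch (this is not the authors’ actual simulation code; the group sizes, the accuracy value, and the majority-vote rule are assumptions chosen for illustration): when each member of a group independently answers a binary question correctly with probability slightly above .50, the accuracy of the majority judgment rises with group size.

```python
# Minimal sketch of majority-vote aggregation in a binary choice task.
# Assumed for illustration: independent judgments, member accuracy p > .50,
# odd group sizes so the majority vote is always decisive.
import random

def group_accuracy(p, group_size, n_questions=10_000, seed=0):
    """Estimate accuracy of the majority judgment of a group whose
    members are each correct with probability p."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_questions):
        votes = sum(rng.random() < p for _ in range(group_size))
        if votes > group_size / 2:
            correct += 1
    return correct / n_questions

if __name__ == "__main__":
    # e.g. "low-confidence" members only slightly above chance (p = .55)
    for size in (1, 3, 9, 27, 81):
        print(size, round(group_accuracy(0.55, size), 3))
```

Under these assumptions, a group of members who are individually only slightly above chance can approach the accuracy of a smaller group of more accurate members once the group is large enough, which is the intuition behind finding (II).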


2017 ◽  
Vol 2017 ◽  
pp. 1-9
Author(s):  
Xiao-Lei Wang ◽  
Da-Gang Lu

The mean seismic probability risk model has been widely used in the seismic design and safety evaluation of critical infrastructures. In this paper, the confidence levels of the mean seismic probability risk model are analyzed and its error equations are derived. It is found that the confidence levels and error values of the mean seismic probability risk model vary across sites, and that for most sites the confidence levels are low and the error values are large. The confidence levels of the ASCE/SEI 43-05 design parameters are also analyzed, and the error equation for the performance probabilities achieved under ASCE/SEI 43-05 is obtained. The confidence levels of design results obtained using the ASCE/SEI 43-05 criteria are not high (less than 95%), a uniform risk with high confidence cannot be achieved using these criteria, and for some sites the errors between the risk model with a target confidence level and the mean risk model under ASCE/SEI 43-05 are large. It is suggested that a seismic risk model accounting for high confidence levels, rather than the mean seismic probability risk model, should be used in the future.
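The abstract does not reproduce the model itself; for orientation, the mean seismic risk is conventionally written as the convolution of the mean hazard curve with the mean fragility curve (a standard formulation assumed here for context, not taken from the paper, and with notation that may differ from the authors’):

```latex
% Standard risk-integral form of the mean seismic probability of failure
% (assumed formulation; notation not taken from the paper).
% H(a): mean annual frequency of exceeding ground-motion level a
% P_{F|a}(a): mean fragility, i.e., conditional probability of failure given a
\[
  P_F \;=\; \int_0^{\infty} P_{F\mid a}(a)\,
            \left| \frac{\mathrm{d}H(a)}{\mathrm{d}a} \right| \mathrm{d}a
\]
```

Because this integral uses mean hazard and fragility curves, it carries no explicit statement of confidence, which is the gap the paper’s confidence-level analysis addresses.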


2018 ◽  
Vol 21 (3) ◽  
pp. 376-384 ◽  
Author(s):  
Karen Kelly ◽  
Carl James Schwarz ◽  
Ricardo Gomez ◽  
Kim Marsh

Purpose The purpose of this paper is to present an empirical study of the time needed to load and disburse cash using bill validators on slot machines and stand-alone cash dispensers in casinos in British Columbia under a Ticket In Ticket Out (TITO) system. Design/methodology/approach Testing took place over two days, using 18 machines. The results were extrapolated to estimate the approximate time required to process $1,000,000 for different average bill amounts in the cash mix and for three bill validator models in common use. The average value per bill in the cash mix used by the public in the casino was $33.11 [standard error (SE) $2.11]. Findings The mean time per accepted note ranged from 4.12 to 9.65 s, depending on bill validator type. This implies that the time needed to load $1,000,000 onto credit slips using bill validators on slot machines ranges from 35 to 81 h, excluding rest and other breaks. The time needed to redeem $1,000,000 is estimated at 3 h. Practical implications The implications of these findings for illicit actors seeking to launder large amounts of cash are discussed. Given the time needed to physically handle the cash, and the other control systems currently in use in casinos in British Columbia, processing large amounts of cash through bill validators on slot machines would require a highly organized team that would find it difficult to elude detection. Originality/value The trial results provide a baseline estimate to be used going forward when investigating or proposing money laundering methodologies that involve slot machines.
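The 35–81 h range follows directly from the reported averages; a short back-of-the-envelope sketch of the arithmetic, using only the figures stated in the abstract, is:

```python
# Back-of-the-envelope check of the loading-time estimate,
# using only the figures reported in the abstract.
MEAN_BILL_VALUE = 33.11         # average value per accepted bill ($)
TIME_PER_NOTE_S = (4.12, 9.65)  # mean seconds per accepted note, by validator type
TARGET = 1_000_000              # dollars to load

bills_needed = TARGET / MEAN_BILL_VALUE  # roughly 30,200 bills
for seconds_per_note in TIME_PER_NOTE_S:
    hours = bills_needed * seconds_per_note / 3600
    print(f"{seconds_per_note} s/note -> {hours:.0f} h")  # ~35 h and ~81 h
```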


2017 ◽  
Vol 2 (1) ◽  
pp. 89-104 ◽  
Author(s):  
Guoqiang Liang ◽  
Haiyan Hou ◽  
Zhigang Hu ◽  
Fu Huang ◽  
Yajie Wang ◽  
...  

Abstract Purpose Research fronts build on recent work, but using times cited as the traditional indicator to detect research fronts inevitably introduces a time lag. This study explores whether usage count, as a new indicator, can shorten the time lag of classic indicators in research front detection. Design/methodology/approach An exploratory study was conducted in which the new indicator, usage count, was compared with the traditional citation indicator, times cited, in detecting research fronts in the regenerative medicine domain. An initial topic search for the term “regenerative medicine” returned 10,553 records published between 2000 and 2015 in the Web of Science (WoS). We first ranked these records by usage count and by times cited, respectively, and selected the top 2,000 records for each. We then performed a co-citation analysis to obtain the citing papers of the co-citation clusters as the research fronts. Finally, we compared the average publication year of the citing papers as well as the mean cited year of the co-citation clusters. Findings Within the same research front, the citing articles detected by usage count tend to be published more recently than those detected by times cited. Moreover, research fronts detected by usage count tend to fall within the last two years, showing higher immediacy and a more real-time character than times cited. The mean cited years (the “intellectual base”) of the clusters generated by usage count span approximately three years, compared with about four years in the times-cited network. Compared with times cited, usage count is a dynamic and instant indicator. Research limitations We aim to find cutting-edge research fronts, but fronts generated from co-citations may instead reflect hot research fronts. The usage count of older highly cited papers was not taken into consideration, because the usage count indicator released by WoS only reflects usage logs after February 2013. Practical implications The article provides a new perspective on using usage count as a new indicator to detect research fronts. Originality/value Usage count can greatly shorten the time lag in research front detection and would be a promising complementary indicator for detecting the latest research fronts.
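A much-simplified sketch of the ranking comparison is given below (the column names usage_count, times_cited, and pub_year are hypothetical, the input file is assumed, and the sketch deliberately skips the co-citation clustering step, contrasting only the recency of the two top-2,000 record sets):

```python
# Simplified sketch: compare the recency of the top-2,000 records ranked
# by usage count vs. by times cited. Column names are hypothetical;
# real WoS exports will differ.
import pandas as pd

def mean_year_of_top(records: pd.DataFrame, indicator: str, k: int = 2000) -> float:
    """Mean publication year of the k records with the highest value of `indicator`."""
    top = records.nlargest(k, indicator)
    return top["pub_year"].mean()

# records = pd.read_csv("regenerative_medicine_wos.csv")  # hypothetical export
# print("usage count :", mean_year_of_top(records, "usage_count"))
# print("times cited :", mean_year_of_top(records, "times_cited"))
```

A higher mean publication year for the usage-count ranking would correspond to the abstract’s claim that usage count surfaces more recent work than times cited.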

