On the Value of Alert Systems and Gentle Rule Enforcement in Addressing Pandemics

2020 ◽  
Vol 11 ◽  
Author(s):  
Yefim Roth ◽  
Ori Plonsky ◽  
Edith Shalev ◽  
Ido Erev

The COVID-19 pandemic poses a major challenge to policy makers on how to encourage compliance with social distancing and personal protection rules. This paper compares the effectiveness of two policies that aim to increase the frequency of responsible health behavior using smartphone-tracking applications. The first involves enhanced alert capabilities, which remove social externalities and protect users from others’ reckless behavior. The second adds a rule enforcement mechanism that reduces the users’ benefit from reckless behavior. Both strategies should be effective whether agents are expected-value maximizers, are risk averse, behave in accordance with cumulative prospect theory (Tversky and Kahneman, 1992), or behave in accordance with the Cognitive Hierarchy model (Camerer et al., 2004). A multi-player trust-game experiment was designed to compare the effectiveness of the two policies. The results reveal a substantial advantage to the enforcement application, even one with occasional misses. The enhanced-alert strategy was completely ineffective. The findings align with the small samples hypothesis, suggesting that decision makers tend to select the options that lead to the best payoff in a small sample of similar past experiences. In the current context, the tendency to rely on a small sample appears to be more consequential than other deviations from rational choice.
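The small samples hypothesis lends itself to a compact simulation. The sketch below is illustrative only, not the authors’ experimental paradigm; all payoff values and probabilities are assumptions. It picks, for each option, the one with the best mean payoff in a random sample of k past experiences, and shows why a rare large loss deters less than a frequent moderate fine.

```python
import random

def choose_by_small_sample(payoffs, k, rng):
    """Small samples hypothesis: choose the option with the best mean
    payoff in a random sample of k past experiences per option."""
    def sample_mean(history):
        return sum(rng.choice(history) for _ in range(k)) / k
    return max(payoffs, key=lambda opt: sample_mean(payoffs[opt]))

def reckless_rate(payoffs, trials=2000, k=5, seed=0):
    """Fraction of simulated choices that pick 'reckless'."""
    rng = random.Random(seed)
    picks = sum(choose_by_small_sample(payoffs, k, rng) == "reckless"
                for _ in range(trials))
    return picks / trials

# Rare large loss (unmitigated risk): EV(reckless) = 0.98*1 - 0.02*100 = -1,
# yet most samples of 5 experiences contain no loss at all.
rare_loss = {"reckless": [1] * 98 + [-100] * 2, "responsible": [0] * 100}
# Frequent moderate fine (gentle enforcement): EV = 0.2*1 - 0.8*3 = -2.2,
# and almost every sample of 5 experiences contains at least one fine.
enforced = {"reckless": [1] * 20 + [-3] * 80, "responsible": [0] * 100}

print(reckless_rate(rare_loss))  # most choices are reckless despite EV < 0
print(reckless_rate(enforced))   # reckless is almost never chosen
```

The contrast mirrors the experimental result: rare penalties are under-represented in small samples of experience, while frequent ones are not.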


2018 ◽  
Vol 2018 ◽  
pp. 1-10
Author(s):  
Lifeng Wu ◽  
Yan Chen

To deal with forecasting from small samples in the supply chain, three grey models with fractional-order accumulation are presented. Human judgment of future trends is incorporated through the order of accumulation. The output of the proposed models provides decision-makers in the supply chain with more forecasting information for short time periods. Results on real-world examples demonstrate remarkable prediction performance compared with traditional forecasting models.
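As an illustration of the mechanics (a sketch under stated assumptions, not the paper’s exact models), fractional-order accumulation generalizes the cumulative sum used in the classic GM(1,1) grey model, and the order r is the knob through which human judgment enters. The code below implements r-order accumulation and a GM(1,1) fit on the accumulated series, inverting via the semigroup property A^r ∘ A^(1-r) = A^1; parameter values are illustrative and 0 < r < 1 is assumed.

```python
import numpy as np
from math import gamma

def frac_accumulate(x, r):
    """r-order accumulation: y[k] = sum_i C(k-i+r-1, k-i) * x[i],
    with generalized binomial coefficients; r = 1 gives the cumsum."""
    x = np.asarray(x, dtype=float)
    y = np.zeros(len(x))
    for k in range(len(x)):
        for i in range(k + 1):
            y[k] += gamma(k - i + r) / (gamma(r) * gamma(k - i + 1)) * x[i]
    return y

def fgm11_forecast(x, r, horizon):
    """Fit GM(1,1) to the r-order accumulated series, forecast `horizon`
    steps, and map back to the original scale (assumes 0 < r < 1, so
    accumulating by 1 - r and first-differencing inverts A^r)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xr = frac_accumulate(x, r)
    z = 0.5 * (xr[1:] + xr[:-1])               # background values
    A = np.column_stack([-z, np.ones(n - 1)])  # grey equation design
    a, b = np.linalg.lstsq(A, xr[1:] - xr[:-1], rcond=None)[0]
    ks = np.arange(n + horizon)
    xr_hat = (x[0] - b / a) * np.exp(-a * ks) + b / a  # whitening solution
    restored = frac_accumulate(xr_hat, 1 - r)  # now order-1 accumulated
    fitted = np.diff(np.concatenate([[0.0], restored]))
    return fitted[n:]
```

Choosing r closer to 0 weights recent observations more heavily, which is one way judgment about short-horizon trends can be encoded.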


1994 ◽  
Vol 33 (02) ◽  
pp. 180-186 ◽  
Author(s):  
H. Brenner ◽  
O. Gefeller

Abstract: The traditional concept of describing the validity of a diagnostic test neglects the presence of chance agreement between test result and true (disease) status. Sensitivity and specificity, as the fundamental measures of validity, can thus only be considered in conjunction with each other to provide an appropriate basis for the evaluation of the capacity of the test to discriminate truly diseased from truly undiseased subjects. In this paper, chance-corrected analogues of sensitivity and specificity are presented as supplemental measures of validity, which pay attention to the problem of chance agreement and offer the opportunity to be interpreted separately. While recent proposals of chance-correction techniques, suggested by several authors in this context, lead to measures which are dependent on disease prevalence, our method does not share this major disadvantage. We discuss the extension of the conventional ROC-curve approach to chance-corrected measures of sensitivity and specificity. Furthermore, point and asymptotic interval estimates of the parameters of interest are derived under different sampling frameworks for validation studies. The small sample behavior of the estimates is investigated in a simulation study, leading to a logarithmic modification of the interval estimate in order to hold the nominal confidence level for small samples.


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Christian Dagenais

Abstract Background Despite the increased emphasis placed on the use of evidence for policy development, relatively few initiatives have been developed to support evidence-informed decision-making, especially in West Africa. Moreover, studies examining the conditions under which policy-makers use research-based evidence are still scarce, but they show that attitudes and opinions about research are one of the main determinants of such use. In February 2017, Burkina Faso’s Minister of Health planned to create a unit to promote evidence-informed decision-making within the ministry. Before the unit was set up, documenting attitudes towards research at the highest levels of the Ministry appeared valuable for planning the unit. Method The author conducted individual interviews with 14 actors from the cabinet of Burkina Faso’s Minister of Health who were positioned to consider evidence during decision-making. An interview grid was used to explore several themes: attitudes towards research, obstacles to and facilitators of research use, examples of research use in decision-making, and ways to increase decision-makers’ participation in knowledge transfer activities. Interviews were partially transcribed and analysed by the author. Results The results show a mixed attitude towards research and relatively little reported research use. Important obstacles were identified: inaccessibility of evidence, lack of implementation guidelines, absence of a clear communication strategy, and studies’ lack of relevance for decision-making. Suggestions included raising awareness, improving access to and communication of research, and prioritizing interactions with researchers. Respondents acknowledged the low participation of decision-makers in knowledge transfer activities; they suggested more leadership from senior officials and greater awareness of the importance of their presence.
Conclusions The conclusion presents avenues for reflection and action to increase the potential impact of the knowledge transfer unit planned within the Ministry of Health of Burkina Faso. This innovative initiative will be impactful if the obstacles identified in this study and policy-makers’ preferences and needs are taken into account during its development and implementation.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Florent Le Borgne ◽  
Arthur Chatton ◽  
Maxime Léger ◽  
Rémi Lenain ◽  
Yohann Foucher

Abstract In clinical research, there is a growing interest in the use of propensity score-based methods to estimate causal effects. G-computation is an alternative because of its high statistical power. Machine learning is also increasingly used because of its possible robustness to model misspecification. In this paper, we aimed to propose an approach that combines machine learning and G-computation when both the outcome and the exposure status are binary and that can deal with small samples. We evaluated the performance of several methods, including penalized logistic regressions, a neural network, a support vector machine, boosted classification and regression trees, and a super learner, through simulations. We proposed six different scenarios characterised by various sample sizes, numbers of covariates, and relationships between covariates, exposure statuses, and outcomes. We also illustrated the application of these methods by using them to estimate the efficacy of barbiturates prescribed during the first 24 h of an episode of intracranial hypertension. In the context of G-computation, for estimating the individual outcome probabilities in two counterfactual worlds, we found that the super learner tended to outperform the other approaches in terms of both bias and variance, especially for small sample sizes. The support vector machine performed well, but its mean bias was slightly higher than that of the super learner. In the investigated scenarios, G-computation associated with the super learner was a performant method for drawing causal inferences, even from small sample sizes.
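The G-computation step itself is simple to state. Below is a minimal sketch with a plain logistic outcome model in place of the paper’s super learner; the simulated data and all parameter values are assumptions for illustration, not the paper’s study.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Logistic regression via Newton-Raphson; intercept added internally."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X1 @ beta))
        H = X1.T @ (X1 * (p * (1 - p))[:, None]) + 1e-8 * np.eye(X1.shape[1])
        beta += np.linalg.solve(H, X1.T @ (y - p))
    return beta

def g_computation(X, a, y):
    """Marginal risk difference by G-computation: fit P(Y=1 | A, X),
    predict every subject's outcome with A forced to 1 and then to 0,
    and contrast the two average counterfactual predictions."""
    beta = fit_logistic(np.column_stack([a, X]), y)
    def mean_risk(a_value):
        Xcf = np.column_stack([np.ones(len(X)), np.full(len(X), a_value), X])
        return (1 / (1 + np.exp(-Xcf @ beta))).mean()
    return mean_risk(1.0) - mean_risk(0.0)

# Simulated confounded data: x raises both exposure and outcome.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=(n, 1))
a = (rng.random(n) < 1 / (1 + np.exp(-x[:, 0]))).astype(float)
y = (rng.random(n) < 1 / (1 + np.exp(-(-1.0 + a + x[:, 0])))).astype(float)
print(g_computation(x, a, y))               # close to the true marginal effect
print(y[a == 1].mean() - y[a == 0].mean())  # naive contrast, inflated by confounding
```

The paper’s contribution is the flexible learner in step one; the counterfactual-prediction-and-average step stays the same whatever model is plugged in.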


2011 ◽  
Vol 6 (2) ◽  
pp. 252-277 ◽  
Author(s):  
Stephen T. Ziliak

Abstract Student's exacting theory of errors, both random and real, marked a significant advance over ambiguous reports of plant life and fermentation asserted by chemists from Priestley and Lavoisier down to Pasteur and Johannsen, working at the Carlsberg Laboratory. One reason seems to be that William Sealy Gosset (1876–1937) aka “Student” – he of Student's t-table and test of statistical significance – rejected artificial rules about sample size, experimental design, and the level of significance, and took instead an economic approach to the logic of decisions made under uncertainty. In his job as Apprentice Brewer, Head Experimental Brewer, and finally Head Brewer of Guinness, Student produced small samples of experimental barley, malt, and hops, seeking guidance for industrial quality control and maximum expected profit at the large-scale brewery. In the process Student invented or inspired half of modern statistics. This article draws on original archival evidence, shedding light on several core yet neglected aspects of Student's methods, that is, Guinnessometrics, not discussed by Ronald A. Fisher (1890–1962). The focus is on Student's small sample, economic approach to real error minimization, particularly in field and laboratory experiments he conducted on barley and malt, 1904 to 1937. Balanced designs of experiments, he found, are more efficient than random ones and have higher power to detect large and real treatment differences in a series of repeated and independent experiments. Student's world-class achievement poses a challenge to every science. Should statistical methods – such as the choice of sample size, experimental design, and level of significance – follow the purpose of the experiment, rather than the other way around? (JEL classification codes: C10, C90, C93, L66)
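Student's finding that balanced designs beat random allocation can be checked with a small simulation when plots share a fertility gradient. This is a hedged sketch: the design sizes, gradient, effect size, and the crude "t > 2" significance criterion are illustrative assumptions, not Guinness data.

```python
import random, statistics

def run_trial(rng, balanced, n_pairs=8, effect=1.5, slope=0.5):
    """One simulated field trial on 2*n_pairs plots lying along a
    fertility gradient; returns True if the effect is detected (t > 2)."""
    fert = [slope * plot for plot in range(2 * n_pairs)]
    if balanced:
        # Balanced: one treated and one control plot per adjacent pair,
        # analysed by within-pair differences (the gradient cancels).
        diffs = []
        for p in range(n_pairs):
            i, j = (2 * p, 2 * p + 1) if rng.random() < 0.5 else (2 * p + 1, 2 * p)
            diffs.append((fert[i] + effect + rng.gauss(0, 1))
                         - (fert[j] + rng.gauss(0, 1)))
        t = statistics.mean(diffs) / (statistics.stdev(diffs) / n_pairs ** 0.5)
    else:
        # Completely randomized: the gradient inflates between-plot variance.
        plots = list(range(2 * n_pairs))
        rng.shuffle(plots)
        treat = [fert[i] + effect + rng.gauss(0, 1) for i in plots[:n_pairs]]
        ctrl = [fert[i] + rng.gauss(0, 1) for i in plots[n_pairs:]]
        se = (statistics.variance(treat) / n_pairs
              + statistics.variance(ctrl) / n_pairs) ** 0.5
        t = (statistics.mean(treat) - statistics.mean(ctrl)) / se
    return t > 2.0  # rough one-sided criterion

def power(balanced, trials=1500, seed=7):
    rng = random.Random(seed)
    return sum(run_trial(rng, balanced) for _ in range(trials)) / trials

print(power(balanced=True))   # the balanced design detects the effect far more often
print(power(balanced=False))
```

Blocking against the gradient is exactly the kind of "real error" minimization the article credits to Student.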


2016 ◽  
Vol 41 (5) ◽  
pp. 472-505 ◽  
Author(s):  
Elizabeth Tipton ◽  
Kelly Hallberg ◽  
Larry V. Hedges ◽  
Wendy Chan

Background: Policy makers and researchers are frequently interested in understanding how effective a particular intervention may be for a specific population. One approach is to assess the degree of similarity between the sample in an experiment and the population. Another approach is to combine information from the experiment and the population to estimate the population average treatment effect (PATE). Method: Several methods for assessing the similarity between a sample and population currently exist as well as methods estimating the PATE. In this article, we investigate properties of six of these methods and statistics in the small sample sizes common in education research (i.e., 10–70 sites), evaluating the utility of rules of thumb developed from observational studies in the generalization case. Result: In small random samples, large differences between the sample and population can arise simply by chance and many of the statistics commonly used in generalization are a function of both sample size and the number of covariates being compared. The rules of thumb developed in observational studies (which are commonly applied in generalization) are much too conservative given the small sample sizes found in generalization. Conclusion: This article implies that sharp inferences to large populations from small experiments are difficult even with probability sampling. Features of random samples should be kept in mind when evaluating the extent to which results from experiments conducted on nonrandom samples might generalize.
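A commonly used similarity statistic in this literature is the absolute standardized mean difference (SMD) per covariate, screened against the 0.25 rule of thumb imported from observational studies. The sketch below (illustrative assumptions: independent normal covariates, a synthetic population) shows why that cutoff is too conservative at generalization sample sizes: with 10 covariates and random samples of 30 sites, most pure random samples "fail" on at least one covariate by chance alone.

```python
import random, statistics

def chance_failure_rate(n_sites=30, n_covariates=10, cutoff=0.25,
                        trials=300, seed=3):
    """Probability that a pure random sample of sites exceeds the SMD
    cutoff on at least one covariate purely by sampling variability."""
    rng = random.Random(seed)
    pops = [[rng.gauss(0, 1) for _ in range(2000)] for _ in range(n_covariates)]
    stats = [(statistics.mean(p), statistics.pstdev(p)) for p in pops]
    failures = 0
    for _ in range(trials):
        for pop, (mu, sd) in zip(pops, stats):
            smd = abs(statistics.mean(rng.sample(pop, n_sites)) - mu) / sd
            if smd > cutoff:
                failures += 1
                break
    return failures / trials

print(chance_failure_rate())  # well above one half
```

Since the sampling standard deviation of an SMD is roughly 1/sqrt(n), a fixed cutoff that is reasonable for large observational samples is routinely exceeded by chance when n is 10–70.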


PEDIATRICS ◽  
1989 ◽  
Vol 83 (3) ◽  
pp. A72-A72
Author(s):  
Student

The believer in the law of small numbers practices science as follows:
1. He gambles his research hypotheses on small samples without realizing that the odds against him are unreasonably high. He overestimates power.
2. He has undue confidence in early trends (e.g., the data of the first few subjects) and in the stability of observed patterns (e.g., the number and identity of significant results). He overestimates significance.
3. In evaluating replications, his or others', he has unreasonably high expectations about the replicability of significant results. He underestimates the breadth of confidence intervals.
4. He rarely attributes a deviation of results from expectations to sampling variability, because he finds a causal "explanation" for any discrepancy. Thus, he has little opportunity to recognize sampling variation in action. His belief in the law of small numbers, therefore, will forever remain intact.
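The first point is easy to make concrete with a simulation (assumed values throughout, and a crude |t| > 2 criterion in place of an exact test): with a medium true effect and 20 subjects per group, only about a third of such experiments reach significance. The replication probability lamented in the third point is that same modest power, not anything close to one.

```python
import random, statistics

def significant(rng, n, effect):
    """One two-group experiment; crude |t| > 2 criterion (~5% level)."""
    a = [rng.gauss(effect, 1) for _ in range(n)]
    b = [rng.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return abs(statistics.mean(a) - statistics.mean(b)) / se > 2

def power(n, effect=0.5, trials=4000, seed=5):
    """Fraction of simulated experiments that reach 'significance'."""
    rng = random.Random(seed)
    return sum(significant(rng, n, effect) for _ in range(trials)) / trials

print(power(n=20))  # roughly one chance in three
print(power(n=80))  # markedly better odds with larger samples
```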


Author(s):  
Victoria A. Beard ◽  
Diana Mitlin

This paper highlights challenges of water access in towns and cities of the global South and explores potential policy responses. These challenges are not new; we argue, however, that policy makers have underestimated them because of a focus on global data, with the result that decision makers pay insufficient attention to these problems. Policies need to be based on a more accurate assessment of the challenges, specifically the need for continuous and affordable water service and the need to provide services to informal settlements. We share findings from research on 15 cities across Latin America, Asia, and Africa.

