Deciphering human decision rules in motion discrimination

Author(s):  
Jinfeng Huang ◽  
Alexander Yu ◽  
Yifeng Zhou ◽  
Zili Liu

Abstract We investigated the eight decision rules for a same-different task summarized in Petrov (Psychonomic Bulletin & Review, 16(6), 1011–1025, 2009). These rules, including the differencing (DF) rule and the optimal independence rule, are all based on the standard model in signal detection theory. Each rule receives two stimulus values as inputs and uses one or two decision criteria. We proved that the false alarm rate p(F) ≤ 1/2 for four of the rules. We also conducted a same-different rating experiment on motion discrimination (n = 54), with a 4° or 8° directional difference. We found that the human receiver operating characteristic (ROC) spanned the full range [0, 1] in p(F), thus rejecting these four rules. The slope of the human Z-ROC was also < 1, further confirming that the independence rule was not used. We subsequently fitted the human data in the four-dimensional (pAA, pAB, pBA, pBB) space to the remaining four rules: the DF and likelihood ratio rules, each with one or two criteria, where pXY = p(responding "different" given stimulus sequence XY). Using residual distribution analysis, we found that only the two-criterion DF rule (DF2) could account for the human data.
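
To make the DF2 rule concrete, here is a minimal simulation sketch under the standard equal-variance signal detection model; the stimulus separation d and the two criteria c_lo and c_hi are illustrative assumptions, not the paper's fitted estimates.

```python
# A minimal sketch of the two-criterion differencing (DF2) rule under
# the standard equal-variance signal detection model. The separation d
# and the criteria c_lo, c_hi are illustrative, not fitted values.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                 # simulated trials per stimulus pair
d = 1.0                     # assumed stimulus separation (d')
c_lo, c_hi = -1.5, 1.5      # the two decision criteria

def p_different(mu1: float, mu2: float) -> float:
    """P(respond "different") for stimuli with means mu1 and mu2."""
    diff = (mu1 + rng.normal(size=n)) - (mu2 + rng.normal(size=n))
    # Respond "different" when the difference falls outside [c_lo, c_hi].
    return float(np.mean((diff < c_lo) | (diff > c_hi)))

p_F = p_different(0.0, 0.0)  # same pair (AA) -> false alarm rate
p_H = p_different(d, 0.0)    # different pair (BA) -> hit rate
print(f"p(F) = {p_F:.3f}, p(H) = {p_H:.3f}")
```

Sweeping the two criteria toward each other drives p(F) toward 1, so DF2 can trace ROC points with p(F) > 1/2, which the four rejected rules cannot reach.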

2020 ◽  
Vol 7 (Supplement_1) ◽  
pp. S162-S163
Author(s):  
Guillermo Rodriguez-Nava ◽  
Daniela Patricia Trelles-Garcia ◽  
Maria Adriana Yanez-Bello ◽  
Chul Won Chung ◽  
Sana Chaudry ◽  
...  

Abstract Background As the ongoing COVID-19 pandemic develops, there is a need for prediction rules to guide clinical decisions. Previous reports have identified risk factors using statistical inference models. The primary goal of those models is to characterize the relationship between variables and outcomes, not to make predictions. In contrast, the primary purpose of machine learning is to obtain a model that can make repeatable predictions. The objective of this study was to develop decision rules tailored to our patient population to predict ICU admission and death in patients with COVID-19. Methods We used a de-identified dataset of hospitalized adults with COVID-19 admitted to our community hospital between March 2020 and June 2020. We used a Random Forest algorithm to build the prediction models for ICU admission and death. Random Forest is one of the most powerful machine learning algorithms; it leverages the power of multiple randomly created decision trees for making decisions. Results 313 patients were included; 237 patients were used to train each model, 26 for testing, and 50 for validation. A total of 16 variables, selected according to their availability in the Emergency Department, were fit into the models. For the survival model, the combination of age >57 years, the presence of altered mental status, procalcitonin ≥3.0 ng/mL, a respiratory rate >22, and a blood urea nitrogen >32 mg/dL resulted in a decision rule with an accuracy of 98.7% in training, 73.1% in testing, and 70% in validation (Table 1, Figure 1). For the ICU admission model, the combination of age <82 years, a systolic blood pressure ≤94 mm Hg, an oxygen saturation ≤93%, a lactate dehydrogenase >591 IU/L, and a lactic acid >1.5 mmol/L resulted in a decision rule with an accuracy of 99.6% in training, 80.8% in testing, and 82% in validation (Table 2, Figure 2). Conclusion We created decision rules using machine learning to predict ICU admission or death in patients with COVID-19. Although some of these variables have previously been identified with statistical inference, these decision rules are customized to our patient population; furthermore, we can continue to train the models by fitting more data from new patients to create even more accurate prediction rules. Table 1. Measures of Performance in Predicting Inpatient Mortality. Figure 1. Receiver Operating Characteristic (ROC) Curve for Inpatient Mortality. Table 2. Measures of Performance in Predicting Intensive Care Unit Admission. Figure 2. Receiver Operating Characteristic (ROC) Curve for Intensive Care Unit Admission. Disclosures All Authors: No reported disclosures
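
For illustration, the reported mortality rule can be transcribed directly as a conjunction of thresholds; the function and argument names below are our own, and the study itself used the trained Random Forest rather than this hand-coded sketch.

```python
def predicts_death(age_years: float, altered_mental_status: bool,
                   procalcitonin_ng_ml: float, resp_rate: float,
                   bun_mg_dl: float) -> bool:
    """Flag a patient as high mortality risk when every condition
    of the reported decision rule is met simultaneously."""
    return (age_years > 57
            and altered_mental_status
            and procalcitonin_ng_ml >= 3.0
            and resp_rate > 22
            and bun_mg_dl > 32)

# Example: a 68-year-old with altered mental status, procalcitonin
# 4.1 ng/mL, respiratory rate 26, and BUN 40 mg/dL meets the rule.
print(predicts_death(68, True, 4.1, 26, 40))  # True
```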


Author(s):  
Özgür Şimşek

The lexicographic decision rule is one of the simplest methods of choosing among decision alternatives. It is based on a simple priority ranking of the attributes available. According to the lexicographic decision rule, a decision alternative is better than another alternative if and only if it is better than the other alternative in the most important attribute on which the two alternatives differ. In other words, the lexicographic decision rule does not allow trade-offs among the various attributes. For example, if quality is considered to be more important than cost, no difference in price can compensate for a difference in quality: The lexicographic decision rule chooses the item with the best quality regardless of the cost. Over the years, the lexicographic decision rule has been compared to various statistical learning methods, including multiple linear regression, support vector machines, decision trees, and random forests. The results show that the lexicographic decision rule can sometimes compete remarkably well with more complex statistical methods, and even outperform them, despite its naively simple structure. These results have stimulated a rich scientific literature on why, and under what conditions, lexicographic decision rules yield accurate decisions. Due to the simplicity of its decision process, its fast execution time, and the robustness of its performance in various decision environments, the lexicographic decision rule is considered to be a plausible model of human decision making. In particular, the lexicographic decision rule is put forward as a model of how the human mind implements bounded rationality to make accurate decisions when information is scarce, time is short, and computational capacity is limited.
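
As a minimal sketch of how such a rule executes (the function name and the higher-is-better convention are our own assumptions):

```python
from typing import Sequence

def lexicographic_choice(a: Sequence[float], b: Sequence[float],
                         priority: Sequence[int]) -> int:
    """Return 0 if alternative `a` wins, 1 if `b` wins, -1 on a tie.

    Attributes are compared one at a time in `priority` order (most
    important first); the first attribute on which the alternatives
    differ decides the choice, with no trade-offs. Higher attribute
    values are assumed to be better.
    """
    for i in priority:
        if a[i] > b[i]:
            return 0
        if a[i] < b[i]:
            return 1
    return -1  # identical on every attribute

# Quality (index 0) outranks cost rating (index 1): the higher-quality
# item wins even though it scores far worse on the second attribute.
item_a = (9.0, 2.0)   # high quality, poor cost rating
item_b = (7.0, 10.0)  # lower quality, excellent cost rating
assert lexicographic_choice(item_a, item_b, priority=[0, 1]) == 0
```

Because the loop stops at the first discriminating attribute, the rule's running time does not depend on how many less important attributes follow, which is one source of its frugality.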


Author(s):  
Nan Hu

Business operators and stakeholders often need to make decisions such as choosing between A and B, or between yes and no, and these decisions are often made using a classification tool or a set of decision rules. Decision tools usually include scoring systems, predictive models, and quantitative test modalities. In this chapter, the authors introduce receiver operating characteristic (ROC) curves and demonstrate, through an example of a bank's decision to grant loans to customers, how ROC curves can be used to evaluate decision rules in information-based decision making. In addition, an extension to time-dependent ROC analysis is introduced. The authors conclude the chapter by illustrating the application of ROC analysis to information-based decision making and outlining future trends in this topic.
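
As a rough sketch of the underlying computation (not the chapter's own code), an ROC curve can be traced by sweeping a decision threshold over applicant scores; the credit scores and repayment labels below are invented for illustration, and ties between scores are ignored.

```python
import numpy as np

def roc_curve(scores: np.ndarray, labels: np.ndarray):
    """Return (false positive rate, true positive rate) pairs obtained
    by sweeping a decision threshold over the observed scores."""
    order = np.argsort(-scores)   # consider applicants best-first
    labels = labels[order]
    tps = np.cumsum(labels)       # true positives at each cutoff
    fps = np.cumsum(1 - labels)   # false positives at each cutoff
    tpr = tps / labels.sum()
    fpr = fps / (1 - labels).sum()
    return fpr, tpr

# Hypothetical credit scores and repayment outcomes (1 = repaid).
scores = np.array([720, 650, 800, 580, 690, 710], dtype=float)
repaid = np.array([1, 0, 1, 0, 0, 1])
fpr, tpr = roc_curve(scores, repaid)
auc = np.trapz(tpr, fpr)          # area under the ROC curve
print(f"AUC = {auc:.2f}")         # 1.00 here: scores separate perfectly
```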


2020 ◽  
Vol 9 (19) ◽  
Author(s):  
Mei‐Sing Ong ◽  
Jeffrey G. Klann ◽  
Kueiyu Joshua Lin ◽  
Bradley A. Maron ◽  
Shawn N. Murphy ◽  
...  

Background Real‐world healthcare data are an important resource for epidemiologic research. However, accurate identification of patient cohorts, a crucial first step underpinning the validity of research results, remains a challenge. We developed and evaluated claims‐based case ascertainment algorithms for pulmonary hypertension (PH), comparing conventional decision rules with state‐of‐the‐art machine‐learning approaches. Methods and Results We analyzed an electronic health record‐Medicare linked database from two large academic tertiary care hospitals (years 2007–2013). Electronic health record charts were reviewed to form a gold-standard cohort of patients with PH (n=386) and without PH (n=164). Using health encounter data captured in Medicare claims (including patients' demographics, diagnoses, medications, and procedures), we developed and compared 2 approaches for identifying patients with PH: decision rules and machine‐learning algorithms using penalized lasso regression, random forest, and gradient boosting machine. The best-performing rule‐based algorithm, requiring ≥3 PH‐related healthcare encounters and a right heart catheterization, attained an area under the receiver operating characteristic curve (AUC) of 0.64 (sensitivity, 0.75; specificity, 0.48). All 3 machine‐learning algorithms outperformed this rule‐based algorithm (P<0.001). A model derived from the random forest algorithm achieved an AUC of 0.88 (sensitivity, 0.87; specificity, 0.70), and the gradient boosting machine achieved comparable results (AUC, 0.85; sensitivity, 0.87; specificity, 0.70). Penalized lasso regression achieved an AUC of 0.73 (sensitivity, 0.70; specificity, 0.68). Conclusions Research‐grade case identification algorithms for PH can be derived and rigorously validated using machine‐learning algorithms. Simple decision rules commonly applied in the published literature performed poorly; more complex rule‐based algorithms may address the limitations of this approach. PH research using claims data would be considerably strengthened through the use of validated algorithms for cohort ascertainment.
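
A minimal sketch of such a comparison, assuming scikit-learn and synthetic stand-ins for the claims features and chart-review labels (the study's actual features, rule encoding, and model tuning are not reproduced here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(550, 20))        # placeholder claims-derived features
y = rng.integers(0, 2, size=550)      # placeholder chart-review PH labels

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Rule-based baseline, e.g. ">=3 PH-related encounters AND a right heart
# catheterization code", expressed here as two stand-in boolean columns.
rule = (X_te[:, 0] > 0) & (X_te[:, 1] > 0)
print("rule AUC:  ", roc_auc_score(y_te, rule.astype(float)))

# Machine-learning comparator: a random forest scored by predicted
# probability rather than a hard yes/no rule.
forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_tr, y_tr)
print("forest AUC:", roc_auc_score(y_te, forest.predict_proba(X_te)[:, 1]))
```

With real labels the probabilistic model can dominate the hard rule precisely because it ranks borderline cases instead of forcing them through fixed thresholds.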


2021 ◽  
Author(s):  
Julian Skirzyński ◽  
Frederic Becker ◽  
Falk Lieder

Abstract When making decisions, people often overlook critical information or are overly swayed by irrelevant information. A common approach to mitigating these biases is to provide decision-makers, especially professionals such as medical doctors, with decision aids, such as decision trees and flowcharts. Designing effective decision aids is a difficult problem. We propose that recently developed reinforcement learning methods for discovering clever heuristics for good decision-making can be partially leveraged to assist human experts in this design process. One of the biggest remaining obstacles to leveraging these methods for improving human decision-making is that the policies they learn are opaque to people. To solve this problem, we introduce AI-Interpret: a general method for transforming idiosyncratic policies into simple and interpretable descriptions. Our algorithm combines recent advances in imitation learning and program induction with a new clustering method for identifying a large subset of demonstrations that can be accurately described by a simple, high-performing decision rule. We evaluate our new AI-Interpret algorithm and employ it to translate information-acquisition policies discovered through metalevel reinforcement learning. The results of three large behavioral experiments showed that providing the decision rules generated by AI-Interpret as flowcharts significantly improved people's planning strategies and decisions across three different classes of sequential decision problems. Moreover, our fourth experiment revealed that this approach is significantly more effective at improving human decision-making than training people by giving them performance feedback. Finally, a series of ablation studies confirmed that our AI-Interpret algorithm was critical to the discovery of interpretable decision rules and that it is ready to be applied to other reinforcement learning problems. We conclude that the methods and findings presented in this article are an important step towards leveraging automatic strategy discovery to improve human decision-making. The code for our algorithm and the experiments is available at https://github.com/RationalityEnhancement/InterpretableStrategyDiscovery.
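
The AI-Interpret algorithm itself is more involved, but the core idea of distilling an opaque policy's demonstrations into a flowchart-style rule can be sketched with a depth-limited decision tree trained by behavioral cloning; the toy policy and features below are invented for illustration and are not the authors' method.

```python
# A toy illustration (not the AI-Interpret algorithm itself): distill
# demonstrations from an opaque policy into a shallow decision tree
# whose splits read like a flowchart-style decision rule.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
states = rng.uniform(-1, 1, size=(1000, 3))    # toy state features
# Opaque "expert" policy: act (1) when a noisy linear score is positive.
actions = (states @ np.array([0.9, -0.4, 0.1])
           + rng.normal(scale=0.05, size=1000) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=2)     # depth cap = interpretability
tree.fit(states, actions)                      # behavioral cloning step
print(export_text(tree, feature_names=["f0", "f1", "f2"]))
```

The printed tree is a few if-then branches that a person can follow directly, which is the property the flowchart decision aids in the experiments rely on.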


2021 ◽  
Vol 2 (1) ◽  
Author(s):  
Fei Yang ◽  
Xiaolin Meng ◽  
Jiming Guo ◽  
Debao Yuan ◽  
Ming Chen

Abstract The tropospheric delay is a significant error source in Global Navigation Satellite System (GNSS) positioning and navigation. It is usually projected into the zenith direction using a mapping function, so it is particularly important to establish a model that can provide stable and accurate Zenith Tropospheric Delay (ZTD). Because of the regional accuracy differences and poor stability of traditional ZTD models, this paper proposes two methods to refine the Hopfield and Saastamoinen ZTD models: one adds annual and semi-annual periodic terms, and the other is based on a Back-Propagation Artificial Neural Network (BP-ANN). Using 5 years of data from 2011 to 2015 collected at 67 GNSS reference stations in China and its surrounding regions, the four refined models were constructed. The tropospheric products at these GNSS stations were derived from the site-wise Vienna Mapping Function 1 (VMF1). Spatial, temporal, and residual distribution analyses of all six models were conducted using data from 2016 to 2017. The results show that the refined models effectively improve accuracy compared with the traditional models. For the Hopfield model, the improvements in Root Mean Square Error (RMSE) and bias reached 24.5/49.7 mm and 34.0/52.8 mm, respectively. These values became 8.8/26.7 mm and 14.7/28.8 mm when the Saastamoinen model was refined using the two methods. This work benefits GNSS navigation, positioning, and GNSS meteorology by providing more accurate tropospheric prior information.
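
A minimal sketch of the first refinement method, assuming daily ZTD residuals (traditional model minus the VMF1-derived reference) indexed by day of year; the synthetic residuals and coefficient layout below are illustrative, not the paper's estimates.

```python
import numpy as np

def periodic_design(doy: np.ndarray) -> np.ndarray:
    """Design matrix: constant plus annual and semi-annual terms."""
    w = 2 * np.pi * doy / 365.25
    return np.column_stack([np.ones_like(doy),
                            np.cos(w), np.sin(w),          # annual
                            np.cos(2 * w), np.sin(2 * w)]) # semi-annual

doy = np.arange(1, 366, dtype=float)
# Placeholder residuals with annual + semi-annual structure plus noise.
resid = (30 * np.cos(2 * np.pi * doy / 365.25 - 0.5)
         + 8 * np.cos(4 * np.pi * doy / 365.25)
         + np.random.default_rng(2).normal(scale=5, size=doy.size))

# Least-squares fit of the periodic terms, then subtract the fit to
# correct the traditional model's seasonal error.
coef, *_ = np.linalg.lstsq(periodic_design(doy), resid, rcond=None)
corrected = resid - periodic_design(doy) @ coef
print("RMSE before:", np.sqrt(np.mean(resid**2)).round(1),
      "after:", np.sqrt(np.mean(corrected**2)).round(1))
```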


2021 ◽  
pp. 263145412098734
Author(s):  
Sarthak Gaurav

Behavioural economics is a thriving field that offers descriptive models of human decision-making that deviate from the standard model in economics. This article presents insights from behavioural economics that can help address dynamic inconsistency, that is, the time-inconsistency problems of employees, and inform incentive design strategies. The author argues that lessons from behavioural economics can be applied to design solutions that transform HR practices. HR managers and leaders stand to benefit from the emerging evidence from the lab and the field in behavioural economics, which calls for rethinking the conventional understanding of human behaviour.

