Eye Movements Reveal Memory Processes During Similarity- and Rule-Based Decision Making

2014
Author(s): Bettina von Helversen, Agnes Scholz, Jörg Rieskamp

Cognition, 2015, Vol. 136, pp. 228-246
Author(s): Agnes Scholz, Bettina von Helversen, Jörg Rieskamp

2021, Vol. 31 (3), pp. 1-26
Author(s): Aravind Balakrishnan, Jaeyoung Lee, Ashish Gaurav, Krzysztof Czarnecki, Sean Sedwards

Reinforcement learning (RL) is an attractive way to implement high-level decision-making policies for autonomous driving, but learning directly from a real vehicle or a high-fidelity simulator is variously infeasible. We therefore consider the problem of transfer reinforcement learning and study how a policy learned in a simple environment using WiseMove can be transferred to our high-fidelity simulator, WISEMOVE. WiseMove is a framework to study safety and other aspects of RL for autonomous driving. WISEMOVE accurately reproduces the dynamics and software stack of our real vehicle. We find that the accurately modelled perception errors in WISEMOVE contribute the most to the transfer problem. These errors, even when naively modelled in WiseMove, yield an RL policy that performs better in WISEMOVE than a hand-crafted rule-based policy. Applying domain randomization to the environment in WiseMove yields an even better policy. The final RL policy reduces the failures due to perception errors from 10% to 2.75%. We also observe that the RL policy relies significantly less on velocity than the rule-based policy, having learned that its measurement is unreliable.
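The transfer technique the abstract describes, naive perception-error modelling plus domain randomization, amounts to perturbing the observations the policy is trained on in the simple environment. The following Python sketch is only an illustration of that idea, not the authors' code: the wrapper, the Gaussian-plus-dropout noise model, and the parameter ranges are all assumptions.

    import numpy as np

    class NoisyPerceptionWrapper:
        """Wraps any environment exposing reset()/step() and perturbs its observations."""

        def __init__(self, env, sigma_range=(0.0, 0.5), dropout_range=(0.0, 0.1)):
            self.env = env
            self.sigma_range = sigma_range      # std.-dev. range for additive Gaussian noise
            self.dropout_range = dropout_range  # range for per-feature dropout probability

        def reset(self):
            # Resampling the noise parameters every episode is the domain
            # randomization step: the policy never sees one fixed error model.
            self.sigma = np.random.uniform(*self.sigma_range)
            self.dropout = np.random.uniform(*self.dropout_range)
            return self._perturb(self.env.reset())

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            return self._perturb(obs), reward, done, info

        def _perturb(self, obs):
            # Naive perception-error model: Gaussian sensor noise plus
            # occasional dropped readings (features zeroed out).
            obs = np.asarray(obs, dtype=np.float32)
            noisy = obs + np.random.normal(0.0, self.sigma, size=obs.shape)
            noisy[np.random.random(obs.shape) < self.dropout] = 0.0
            return noisy

Training against such a wrapped environment exposes the policy to a distribution of perception errors rather than a single error model, which is what makes behaviour such as distrusting the velocity measurement more likely to survive the transfer to the high-fidelity simulator.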


2021, Vol. 20 (01), pp. 2150013
Author(s): Mohammed Abu-Arqoub, Wael Hadi, Abdelraouf Ishtaiwi

Associative Classification (AC) classifiers are of substantial interest due to their ability to mine vast sets of rules. However, researchers have shown over the decades that a large number of these mined rules are trivial, irrelevant, redundant, and sometimes harmful, as they can bias decision making. Accordingly, in this paper we address these challenges and propose a novel AC approach based on the RIPPER algorithm, which we refer to as ACRIPPER. Our approach combines the strength of the RIPPER algorithm with the classical AC method in order to achieve (1) a reduction in the number of mined rules, especially those that are largely insignificant, and (2) a high level of integration between the confidence and support of the rules on the one hand and the class-imbalance level in the prediction phase on the other. Our experimental results on 20 well-known datasets reveal that the proposed ACRIPPER significantly outperforms the well-known rule-based algorithms RIPPER and J48. Moreover, ACRIPPER significantly outperforms the current AC-based algorithms CBA, CMAR, ECBA, FACA, and ACPRISM. Finally, ACRIPPER achieves the best average and the best ranking on the accuracy measure.
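For context, an associative classifier of the kind discussed here typically mines class-association rules and then prunes and ranks them by support and confidence before using them for prediction. The Python sketch below illustrates only that generic pruning-and-prediction step; it is not the ACRIPPER algorithm, and the rule representation, thresholds, and example data are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Rule:
        antecedent: frozenset   # set of (attribute, value) conditions
        label: str              # predicted class
        support: float
        confidence: float

    def prune_and_rank(rules, min_support=0.01, min_confidence=0.5):
        # Drop weak rules, then order the survivors so the strongest fire first
        # (confidence first, support as tie-breaker, as in CBA-style ranking).
        kept = [r for r in rules
                if r.support >= min_support and r.confidence >= min_confidence]
        return sorted(kept, key=lambda r: (r.confidence, r.support), reverse=True)

    def classify(instance, ranked_rules, default_label):
        # Predict with the first rule whose antecedent the instance satisfies.
        items = set(instance.items())
        for rule in ranked_rules:
            if rule.antecedent <= items:
                return rule.label
        return default_label

    rules = prune_and_rank([
        Rule(frozenset({("outlook", "sunny")}), "no", support=0.20, confidence=0.80),
        Rule(frozenset({("humidity", "normal")}), "yes", support=0.35, confidence=0.70),
    ])
    print(classify({"outlook": "sunny", "humidity": "normal"}, rules, "yes"))  # -> "no"

According to the abstract, ACRIPPER's refinement of this scheme is to let RIPPER-style induction shrink the mined rule set and to fold the class-imbalance level into the prediction phase, rather than relying only on fixed support and confidence thresholds like the ones above.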


Author(s): Ekaterina Jussupow, Kai Spohrer, Armin Heinzl, Joshua Gawlitza

Systems based on artificial intelligence (AI) increasingly support physicians in diagnostic decisions, but they are not without errors and biases. Failure to detect those may result in wrong diagnoses and medical errors. Compared with rule-based systems, however, these systems are less transparent and their errors less predictable. Thus, it is difficult, yet critical, for physicians to carefully evaluate AI advice. This study uncovers the cognitive challenges that medical decision makers face when they receive potentially incorrect advice from AI-based diagnosis systems and must decide whether to follow or reject it. In experiments with 68 novice and 12 experienced physicians, novice physicians with and without clinical experience as well as experienced radiologists made more inaccurate diagnosis decisions when provided with incorrect AI advice than without advice at all. We elicit five decision-making patterns and show that wrong diagnostic decisions often result from shortcomings in utilizing metacognitions related to decision makers’ own reasoning (self-monitoring) and metacognitions related to the AI-based system (system monitoring). As a result, physicians fall for decisions based on beliefs rather than actual data or engage in unsuitably superficial evaluation of the AI advice. Our study has implications for the training of physicians and spotlights the crucial role of human actors in compensating for AI errors.

