Decision prioritization and causal reasoning in decision hierarchies

2021 ◽  
Vol 17 (12) ◽  
pp. e1009688
Author(s):  
Ariel Zylberberg

From cooking a meal to finding a route to a destination, many real-life decisions can be decomposed into a hierarchy of sub-decisions. In a hierarchy, choosing which decision to think about requires planning over a potentially vast space of possible decision sequences. To gain insight into how people decide what to decide on, we studied a novel task that combines perceptual decision making, active sensing, and hierarchical and counterfactual reasoning. Human participants had to find a target hidden at the lowest level of a decision tree. They could solicit information from the different nodes of the decision tree to gather noisy evidence about the target’s location. Feedback was given only after errors at the leaf nodes and provided ambiguous evidence about the cause of the error. Despite the complexity of the task (with $10^7$ latent states), participants were able to plan efficiently. A computational model of this process identified a small number of heuristics of low computational complexity that accounted for human behavior. These heuristics include making categorical decisions at the branching points of the decision tree rather than carrying forward entire probability distributions, discarding sensory evidence deemed unreliable to make a choice, and using choice confidence to infer the cause of the error after an initial plan failed. Plans based on probabilistic inference or myopic sampling norms could not capture participants’ behavior. Our results show that it is possible to identify hallmarks of heuristic planning with sensing in human behavior and that the use of tasks of intermediate complexity helps identify the rules underlying the human ability to reason over decision hierarchies.
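
The three heuristics summarized above (categorical commitment at branching points, discarding evidence deemed unreliable, and confidence-guided error attribution) lend themselves to a compact algorithmic sketch. The toy model below is not the authors' fitted model; the tree depth, the evidence distribution, the confidence threshold, and the revision rule are illustrative assumptions.

```python
import random
import math

def sample_evidence(true_branch):
    """Noisy evidence about which branch (0 or 1) holds the target,
    returned as a log-odds sample centred on the correct branch."""
    mean = 1.2 if true_branch == 1 else -1.2
    return random.gauss(mean, 1.0)

def choose_branch(true_branch, conf_threshold=0.75, max_samples=10):
    """Categorical commitment: keep sampling until confidence in one
    branch exceeds the threshold, discarding weak samples along the way."""
    for _ in range(max_samples):
        logodds = sample_evidence(true_branch)
        p_right = 1.0 / (1.0 + math.exp(-logodds))
        conf = max(p_right, 1.0 - p_right)
        if conf >= conf_threshold:            # commit categorically
            return (1 if p_right > 0.5 else 0), conf
        # otherwise the sample is deemed unreliable and discarded
    return (1 if p_right > 0.5 else 0), conf  # forced guess after the budget runs out

def plan_and_execute(target_path):
    """Descend the tree committing to one branch per level; after an error,
    revise the level that was chosen with the lowest confidence."""
    depth = len(target_path)
    choices, confs = [], []
    for level in range(depth):
        c, conf = choose_branch(target_path[level])
        choices.append(c)
        confs.append(conf)
    attempts = 1
    while choices != list(target_path):
        attempts += 1
        weakest = min(range(depth), key=lambda i: confs[i])  # blame the least confident step
        choices[weakest], confs[weakest] = choose_branch(target_path[weakest])
    return attempts

random.seed(1)
print(plan_and_execute((1, 0, 1)))  # number of attempts until the target is found
```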


Author(s):  
Manoj Srinivasan ◽  
Syed T. Mubarrat ◽  
Quentin Humphrey ◽  
Thomas Chen ◽  
Kieran Binkley ◽  
...  

In this study, we developed a low-cost simulated testbed of a physically interactive virtual reality (VR) system and evaluated its efficacy as an occupational virtual trainer for human-robot collaborative (HRC) tasks. The VR system could be implemented in industrial training applications for sensorimotor skill acquisition and for identifying potential task-, robot-, and human-induced hazards in industrial environments. One of the challenges in designing and implementing such a simulation testbed is the effective integration of virtual and real objects and environments, including human movement biomechanics. Therefore, this study aimed to compare the movement kinematics (joint angles) and kinetics (center of pressure) of human participants while performing pick-and-place lifting tasks with and without a physically interactive VR testbed. Results showed marginal differences in human movement kinematics and kinetics between real and virtual environment tasks, suggesting the effective transfer of training benefits from VR to real-life situations.
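
As a rough illustration of the kind of comparison described above, the sketch below computes a knee-flexion angle from three marker positions and summarizes real-versus-VR agreement with a root-mean-square difference and a peak difference. The data are synthetic and the function names are hypothetical; none of this reproduces the study's actual processing pipeline.

```python
import numpy as np

def joint_angle(hip, knee, ankle):
    """Knee flexion angle (degrees) from three 3-D marker positions."""
    v1 = hip - knee
    v2 = ankle - knee
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def compare_conditions(real_angles, vr_angles):
    """Per-trial summary of real-vs-VR agreement: root-mean-square
    difference and difference in peak flexion."""
    real = np.asarray(real_angles)
    vr = np.asarray(vr_angles)
    rmse = np.sqrt(np.mean((real - vr) ** 2))
    peak_diff = np.max(real) - np.max(vr)
    return rmse, peak_diff

# Synthetic example: two knee-flexion time series over one lift cycle.
t = np.linspace(0, 1, 101)
real = 60 * np.sin(np.pi * t)        # placeholder "real environment" lift
vr = 58 * np.sin(np.pi * t + 0.02)   # placeholder "VR environment" lift
print(compare_conditions(real, vr))
```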


2012 ◽  
Vol 13 (1) ◽  
pp. 243-256 ◽  
Author(s):  
James O’Connor

The hypothetical scenarios generally known as trolley problems have become widespread in recent moral philosophy. They invariably require an agent to choose one of a strictly limited number of options, all of them bad. Although they don’t always involve trolleys / trams, and are used to make a wide variety of points, what makes it justified to speak of a distinctive “trolley method” is the characteristic assumption that the intuitive reactions that all these artificial situations elicit constitute an appropriate guide to real-life moral reasoning. I dispute this assumption by arguing that trolley cases inevitably constrain the supposed rescuers into behaving in ways that clearly deviate from psychologically healthy, and morally defensible, human behavior. Through this focus on a generally overlooked aspect of trolley theorizing – namely, the highly impoverished role invariably allotted to the would-be rescuer in these scenarios – I aim to challenge the complacent twin assumptions of advocates of the trolley method that this approach to moral reasoning has practical value, and is in any case innocuous. Neither assumption is true.


2020 ◽  
Vol 10 (15) ◽  
pp. 5333
Author(s):  
Anam Manzoor ◽  
Waqar Ahmad ◽  
Muhammad Ehatisham-ul-Haq ◽  
Abdul Hannan ◽  
Muhammad Asif Khan ◽  
...  

Emotions are a fundamental part of human behavior and can be stimulated in numerous ways. In everyday life, we come across different types of objects, such as cakes, crabs, televisions, and trees, which may excite certain emotions. Likewise, the object images that we see and share on different platforms are also capable of expressing or inducing human emotions. Inferring emotion tags from these object images has great significance, as it can play a vital role in recommendation systems, image retrieval, human behavior analysis, and advertisement applications. Existing schemes for emotion tag perception are based on visual features of an image, like color and texture, which are adversely affected by lighting conditions. The main objective of our proposed study is to address this problem by introducing a novel idea of inferring emotion tags from images based on object-related features. In this aspect, we first created an emotion-tagged dataset from the publicly available object detection dataset (i.e., “Caltech-256”) using subjective evaluation by 212 users. Next, we used a convolutional neural network-based model to automatically extract high-level features from object images for recognizing nine emotion categories, namely amusement, awe, anger, boredom, contentment, disgust, excitement, fear, and sadness. Experimental results on our emotion-tagged dataset support the effectiveness of our proposed idea in terms of accuracy, precision, recall, specificity, and F1-score. Overall, the proposed scheme achieved an accuracy of approximately 85% and 79% using top-level and bottom-level emotion tagging, respectively. We also performed a gender-based analysis of the inferred emotion tags and observed that male and female subjects differ in their emotion perception across different object categories.
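
A minimal sketch of a CNN-based tagger of the kind described above is given below. It is not the architecture used in the study; the layer sizes, input resolution, and class ordering are illustrative assumptions, and the snippet only shows how a batch of object images is mapped to nine emotion-tag logits.

```python
import torch
import torch.nn as nn

NUM_EMOTIONS = 9  # amusement, awe, anger, boredom, contentment,
                  # disgust, excitement, fear, sadness

class EmotionTagger(nn.Module):
    """Small CNN that maps an RGB object image to emotion-tag logits."""
    def __init__(self, num_classes=NUM_EMOTIONS):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),   # global pooling -> 32-dimensional feature
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

model = EmotionTagger()
dummy_batch = torch.randn(4, 3, 224, 224)  # four fake 224x224 RGB object images
print(model(dummy_batch).shape)            # -> torch.Size([4, 9])
```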


J ◽  
2019 ◽  
Vol 2 (2) ◽  
pp. 102-115 ◽  
Author(s):  
Christian Montag ◽  
Harald Baumeister ◽  
Christopher Kannen ◽  
Rayna Sariyska ◽  
Eva-Maria Meßner ◽  
...  

With the advent of the World Wide Web, the smartphone, and the Internet of Things, not only society but also the sciences are rapidly changing. In particular, the social sciences can profit from these digital developments, because scientists now have the power to study real-life human behavior via smartphones and other devices connected to the Internet of Things on a large scale. Although this sounds easy, scientists often face the problem that no practicable solution exists to participate in this new scientific movement, due to the lack of an interdisciplinary network. In that case, developing a new product, such as a smartphone application for gaining insights into human behavior, takes an enormous amount of time and resources. Given this problem, the present work presents an easy-to-use smartphone application that social scientists can apply to study a large range of scientific questions. The application measures variables by tracking smartphone-use patterns, such as call behavior, application use (e.g., social media), GPS, and many others. In addition, the presented Android-based smartphone application, called Insights, can also be used to administer self-report questionnaires for experience sampling and to search for co-variations between smartphone usage/smartphone data and self-report data. Importantly, the present work gives a detailed overview of how to conduct a study using an application such as Insights, from designing the study and installing the application to analyzing the data. Server requirements and privacy issues are also discussed. Furthermore, first validation data from personality psychology are presented. Such validation data are important in establishing trust in the applied technology to track behavior. In sum, the aim of the present work is (i) to provide interested scientists with a short overview of how to conduct a study with smartphone app tracking technology, (ii) to present the features of the designed smartphone application, and (iii) to demonstrate its validity with a proof-of-concept study, hence correlating smartphone usage with personality measures.
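
As a sketch of the final analysis step mentioned above (correlating smartphone usage with personality measures), the snippet below computes Pearson correlations between two hypothetical usage aggregates and an extraversion score. The column names and values are illustrative only and do not reflect the actual Insights export schema.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-participant aggregates; names and numbers are made up.
data = pd.DataFrame({
    "daily_unlocks":        [42, 88, 63, 120, 51, 97, 75, 30],
    "social_media_minutes": [35, 140, 80, 200, 60, 150, 95, 20],
    "extraversion_score":   [2.8, 4.1, 3.3, 4.6, 3.0, 4.2, 3.7, 2.5],
})

# Correlate each tracked usage variable with the self-report measure.
for usage_var in ["daily_unlocks", "social_media_minutes"]:
    r, p = pearsonr(data[usage_var], data["extraversion_score"])
    print(f"{usage_var} vs extraversion: r = {r:.2f}, p = {p:.3f}")
```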


2016 ◽  
Vol 2 (1) ◽  
pp. 00077-2015 ◽  
Author(s):  
Esther I. Metting ◽  
Johannes C.C.M. in ’t Veen ◽  
P.N. Richard Dekhuijzen ◽  
Ellen van Heijst ◽  
Janwillem W.H. Kocks ◽  
...  

The aim of this study was to develop and explore the diagnostic accuracy of a decision tree derived from a large real-life primary care population. Data from 9297 primary care patients (45% male, mean age 53±17 years) with suspicion of an obstructive pulmonary disease were derived from an asthma/chronic obstructive pulmonary disease (COPD) service where patients were assessed using spirometry, the Asthma Control Questionnaire, the Clinical COPD Questionnaire, history data and medication use. All patients were diagnosed through the Internet by a pulmonologist. The Chi-squared Automatic Interaction Detection method was used to build the decision tree. The tree was externally validated in another real-life primary care population (n=3215). Our tree correctly diagnosed 79% of the asthma patients, 85% of the COPD patients and 32% of the asthma–COPD overlap syndrome (ACOS) patients. External validation showed a comparable pattern (correct: asthma 78%, COPD 83%, ACOS 24%). Our decision tree is considered to be promising because it was based on real-life primary care patients with a specialist's diagnosis. In most patients the diagnosis could be correctly predicted. Predicting ACOS, however, remained a challenge. The total decision tree can be implemented in computer-assisted diagnostic systems for individual patients. A simplified version of this tree can be used in daily clinical practice as a desk tool.
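
The train-and-report workflow can be sketched as follows. Note that the study used the Chi-squared Automatic Interaction Detection (CHAID) method, which scikit-learn does not implement, so the sketch substitutes a CART classifier on synthetic data with hypothetical features; it illustrates the workflow only, not the published tree.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500

# Hypothetical features loosely inspired by the paper's inputs:
# FEV1/FVC ratio, reversibility (%), age, smoking pack-years, ACQ score.
X = np.column_stack([
    rng.normal(0.72, 0.10, n),   # FEV1/FVC
    rng.normal(8, 6, n),         # reversibility
    rng.integers(18, 85, n),     # age
    rng.integers(0, 60, n),      # pack-years
    rng.normal(1.5, 1.0, n),     # ACQ
])
# Synthetic labels: 0 = asthma, 1 = COPD, 2 = ACOS (toy rule, not clinical logic).
y = np.where(X[:, 0] < 0.65, np.where(X[:, 1] > 12, 2, 1), 0)

# scikit-learn implements CART rather than CHAID, so this is only a structural analogue.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(classification_report(y, tree.predict(X), labels=[0, 1, 2],
                            target_names=["asthma", "COPD", "ACOS"]))
```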


2018 ◽  
Vol 41 (1) ◽  
pp. 96-112 ◽  
Author(s):  
Evy Rombaut ◽  
Marie-Anne Guerry

Purpose: This paper aims to question whether the available data in the human resources (HR) system could result in reliable turnover predictions without supplementary survey information.
Design/methodology/approach: A decision tree approach and a logistic regression model for analysing turnover were introduced. The methodology is illustrated on a real-life data set of a Belgian branch of a private company. The model performance is evaluated by the area under the ROC curve (AUC) measure.
Findings: It was concluded that data in the personnel system indeed lead to valuable predictions of turnover.
Practical implications: The presented approach brings determinants of voluntary turnover to the surface. The results yield useful information for HR departments. Where the logistic regression results in a turnover probability at the individual level, the decision tree makes it possible to ascertain employee groups that are at risk of turnover. With the data-set-based approach, each company can immediately ascertain its own turnover risk.
Originality/value: A data-driven approach to turnover investigation has not been studied so far.
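
A minimal sketch of the two models and the AUC evaluation is shown below. The HR fields, the synthetic turnover mechanism, and the hyperparameters are assumptions made for illustration and bear no relation to the Belgian data set analysed in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 1000

# Hypothetical HR-system fields: tenure (years), age, salary band, commute (km).
X = np.column_stack([
    rng.exponential(5, n),
    rng.integers(21, 60, n),
    rng.integers(1, 8, n),
    rng.normal(25, 12, n),
])
# Toy turnover mechanism: short tenure and long commutes raise the odds of leaving.
logit = -1.0 - 0.3 * X[:, 0] + 0.04 * X[:, 3]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)

logreg = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)   # individual probabilities
tree = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)  # at-risk groups

print("logistic regression AUC:", roc_auc_score(y_te, logreg.predict_proba(X_te)[:, 1]))
print("decision tree AUC:      ", roc_auc_score(y_te, tree.predict_proba(X_te)[:, 1]))
```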


2021 ◽  
Author(s):  
Lara Kirfel ◽  
David Lagnado

Did Tom’s use of nuts in the dish cause Billy’s allergic reaction? According to counterfactual theories of causation, an agent is judged a cause to the extent that their action made a difference to the outcome (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2020; Gerstenberg, Halpern, & Tenenbaum, 2015; Halpern, 2016; Hitchcock & Knobe, 2009). In this paper, we argue for the integration of epistemic states into current counterfactual accounts of causation. In the case of ignorant causal agents, we demonstrate that people’s counterfactual reasoning primarily targets the agent’s epistemic state (what the agent doesn’t know) and their epistemic actions (what they could have done to know), rather than the agent’s actual causal action. In four experiments, we show that people’s causal judgment as well as their reasoning about alternatives is sensitive to the epistemic conditions of a causal agent: knowledge vs. ignorance (Experiment 1), self-caused vs. externally caused ignorance (Experiment 2), the number of epistemic actions (Experiment 3), and the epistemic context (Experiment 4). We see two advantages in integrating epistemic states into causal models and counterfactual frameworks. First, assuming interventions on indirect, epistemic causes might allow us to explain why people attribute decreased causality to ignorant vs. knowing causal agents. Moreover, causal agents’ epistemic states pick out those factors that can be controlled or manipulated in order to achieve desirable future outcomes, reflecting the forward-looking dimension of causality. We discuss our findings in the broader context of moral and causal cognition.
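
To make the counterfactual contrast concrete, the toy structural model below (not the authors' formal model) encodes an allergy scenario in which flipping the agent's epistemic state ("had they known") or their epistemic action ("had they checked") changes the outcome just as much as flipping the physical action; these are the kinds of alternatives the paper argues people reason about. The variable names and outcome rule are illustrative assumptions.

```python
def outcome(uses_nuts, knows_allergy, checks_ingredients):
    """Billy has a reaction iff nuts end up in the dish; a knowing or
    checking agent would leave the nuts out."""
    nuts_in_dish = uses_nuts and not (knows_allergy or checks_ingredients)
    return nuts_in_dish  # True = allergic reaction

def counterfactual_effect(intervene_on, actual):
    """Difference-making of one variable: flip it, hold the rest fixed,
    and check whether the outcome changes (1 = difference-making)."""
    flipped = dict(actual)
    flipped[intervene_on] = not actual[intervene_on]
    return int(outcome(**actual) != outcome(**flipped))

# Ignorant agent: uses nuts, doesn't know about the allergy, didn't check.
actual = {"uses_nuts": True, "knows_allergy": False, "checks_ingredients": False}

for var in actual:
    print(var, "->", counterfactual_effect(var, actual))
```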


Author(s):  
Innocent Boyle Eraikhuemen ◽  
Gerald Ikechukwu Onwuka ◽  
Bassa Shiwaye Yakura ◽  
Hassan Allahde

Recently, researchers have shown much interest in developing new continuous probability distributions by adding one or two parameters to some existing baseline distributions. This has been beneficial to the field of statistical theory, especially in the modeling of real-life situations. The exponentiated family, as used in developing new distributions, is an efficient method proposed and studied for defining more flexible continuous probability distributions for modeling real-life data. In this study, the method of exponentiation has been used to develop a new distribution called the “Exponentiated odd Lindley inverse exponential distribution”. Some properties of the proposed distribution are derived, its unknown parameters are estimated using the method of maximum likelihood, and the distribution is applied to real-life datasets. The new model has been applied to infant mortality rates and mother-to-child HIV transmission rates. The results of these two applications reveal that the proposed model fits better than the other existing models according to several model selection criteria.
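
The exponentiation step itself is easy to sketch: given a baseline CDF $G(x)$, the exponentiated family has CDF $F(x) = G(x)^a$ and density $f(x) = a\,g(x)\,G(x)^{a-1}$. The code below applies this step to an inverse exponential baseline and fits $(a, \theta)$ by maximum likelihood on synthetic data; it is only a sketch of the general method, not the full Exponentiated odd Lindley inverse exponential density studied in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def inv_exp_cdf(x, theta):
    """Baseline: inverse exponential CDF, G(x) = exp(-theta / x), x > 0."""
    return np.exp(-theta / x)

def inv_exp_pdf(x, theta):
    return (theta / x**2) * np.exp(-theta / x)

def exponentiated_pdf(x, a, theta):
    """Exponentiation step: F(x) = G(x)^a, hence f(x) = a * g(x) * G(x)^(a-1)."""
    return a * inv_exp_pdf(x, theta) * inv_exp_cdf(x, theta) ** (a - 1)

def neg_log_lik(params, data):
    a, theta = params
    if a <= 0 or theta <= 0:
        return np.inf
    return -np.sum(np.log(exponentiated_pdf(data, a, theta)))

# Synthetic positive-valued data standing in for, e.g., mortality rates.
rng = np.random.default_rng(42)
data = 1.0 / rng.gamma(shape=2.0, scale=1.0, size=200)

fit = minimize(neg_log_lik, x0=[1.0, 1.0], args=(data,), method="Nelder-Mead")
print("MLE (a, theta):", fit.x)
```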


2017 ◽  
Vol 23 (2) ◽  
pp. 573-597
Author(s):  
István Kádár

In a software system, most runtime failures may come to light only during test execution, and this can have a very high cost. To help address this problem, a symbolic execution engine called RTEHunter, developed at the Department of Software Engineering at the University of Szeged, is able to detect runtime errors (such as null pointer dereference, bad array indexing, and division by zero) in Java programs without actually running the program in a real-life environment. Applying the theory of symbolic execution, RTEHunter builds a tree, called the symbolic execution tree, composed of all the possible execution paths of the program. RTEHunter detects runtime issues by traversing the symbolic execution tree and reporting an issue whenever a certain condition is fulfilled. However, as the number of execution paths increases exponentially with the number of branching points, exploring the whole symbolic execution tree becomes impossible in practice. To overcome this problem, different kinds of constraints can be set up over the tree; for example, the number of symbolic states, the depth of the execution tree, or the time consumption can be restricted. Our goal in this study is to find the optimal parametrization of RTEHunter in terms of the maximum number of states, the maximum depth of the symbolic execution tree, and the search strategy, in order to find more runtime issues in a shorter time. Results on three open-source Java systems demonstrate that more runtime issues can be detected at basic-block depths of 0 to 60 than at deeper levels within the same time frame. We also developed two novel search strategies for traversing the tree, based on the number of null pointer references in the program and on linear regression, which perform better than the default depth-first search strategy.
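
The budgeted exploration described above can be sketched as a depth-first traversal with two cut-offs: a maximum depth past which states are not expanded, and a maximum number of visited states. The toy engine below is not RTEHunter; the state expansion rule and the issue pattern are placeholders meant only to show how the two limits bound the search.

```python
from dataclasses import dataclass

@dataclass
class SymState:
    """Toy symbolic state: a depth, a path-condition label, and an issue flag."""
    depth: int
    path: str
    has_issue: bool = False

def expand(state):
    """Stand-in for constraint solving: every state branches in two, and a
    fixed pattern of paths 'contains' a runtime issue."""
    return [
        SymState(state.depth + 1, state.path + "L", state.path.endswith("LL")),
        SymState(state.depth + 1, state.path + "R", False),
    ]

def explore(max_depth=10, max_states=200):
    """Depth-first traversal with two budgets: states deeper than max_depth
    are not expanded, and the walk stops after max_states visits."""
    stack, visited, issues = [SymState(0, "")], 0, 0
    while stack and visited < max_states:
        state = stack.pop()
        visited += 1
        if state.has_issue:
            issues += 1
        if state.depth < max_depth:
            stack.extend(expand(state))
    return visited, issues

print(explore(max_depth=8, max_states=100))  # (states visited, issues reported)
```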

