Modeling Bias Reduction Strategies in a Biased Agent

Author(s):  
Jaelle Scheuerman ◽  
Dina Acklin

Costly mistakes can occur when decision makers rely on intuition or learned biases to make decisions. To better understand the cognitive processes that lead to bias and to develop strategies for combating it, we developed an intelligent agent using the ACT-R 7.0 cognitive architecture. The agent simulates a human participating in a decision-making task designed to assess the effectiveness of bias reduction strategies. The agent's performance is compared to that of human participants completing a similar task. Where the results align, they support the underlying cognitive theories and reveal limitations of reducing bias in human decision making. This should provide insights for designing intelligent agents that can reason about bias while supporting decision makers.
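As a rough illustration of the general idea (a minimal sketch, not the authors' ACT-R 7.0 model), an agent that starts with a biased prior utility for one option can have that bias gradually corrected by outcome feedback, which is one way a bias reduction strategy might operate in a utility-learning framework. The option names, reward values, and parameters below are invented for illustration.

import math, random

class BiasedAgent:
    """Toy utility-learning agent (not the authors' ACT-R model): a prior bias
    favours one option, and feedback gradually corrects the learned utilities."""
    def __init__(self, options, biased_option, bias=2.0, lr=0.2, temp=0.5):
        self.utilities = {o: (bias if o == biased_option else 0.0) for o in options}
        self.lr, self.temp = lr, temp

    def choose(self):
        # Softmax choice over current utilities
        opts = list(self.utilities)
        weights = [math.exp(self.utilities[o] / self.temp) for o in opts]
        return random.choices(opts, weights)[0]

    def learn(self, option, reward):
        # Move the chosen option's utility toward the observed reward
        self.utilities[option] += self.lr * (reward - self.utilities[option])

agent = BiasedAgent(["A", "B"], biased_option="A")
true_reward = {"A": 0.2, "B": 1.0}
for _ in range(200):
    choice = agent.choose()
    agent.learn(choice, true_reward[choice])
print(agent.utilities)  # learned utilities should now favour B despite the initial bias toward A

In this sketch the "bias reduction strategy" is simply repeated exposure to feedback; richer strategies (e.g., prompting deliberation before choice) would add further mechanisms on top of the same utility-learning loop.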

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.

Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.

Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem likely to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.

Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.

Practical implications: Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.

Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.

Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-26
Author(s):  
Friederike Wall

Coordination among the decision-makers of an organization, each responsible for a certain partition of an overall decision-problem, is of crucial relevance with respect to the overall performance obtained. Among the challenges of coordination in distributed decision-making systems (DDMS) is to understand how environmental conditions, for example, the complexity of the decision-problem to be solved, the problem's predictability and its dynamics, shape the adaptation of coordination mechanisms. These challenges apply to DDMS inhabited by human decision-makers, such as firms, as well as to systems of artificial agents as studied in the domain of multiagent systems (MAS). It is well known that coordination in increasingly large decision-problems and, accordingly, growing organizations involves a particular tension between shaping the search for new solutions and setting appropriate constraints to deal with increasing size and intraorganizational complexity. Against this background, the paper studies the adaptation of coordination in the course of growing decision-making organizations. For this, an agent-based simulation model based on the framework of NK fitness landscapes is employed. The study controls for different levels of complexity of the overall decision-problem, different strategies of search for new solutions, and different levels of cost of effort to implement new solutions. The results suggest that, with respect to the emerging coordination mode, complexity subtly interacts with the search strategy employed and the cost of effort. In particular, the results support the conjecture that increasing complexity leads to more hierarchical coordination. However, the search strategy moderates the predominance of hierarchy in favor of granting more autonomy to decentralized decision-makers. Moreover, the study reveals that the cost of effort for implementing new solutions, in conjunction with the search strategy, may remarkably affect the emerging form of coordination. This could explain differences in prevailing coordination modes across branches or technologies, and could explain the emergence of contextually inferior modes of coordination.
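For readers unfamiliar with the NK formalism, the sketch below builds a minimal NK fitness landscape and runs a one-bit-flip local search over it. It is a generic illustration of the framework, not the paper's agent-based organizational model, and the parameter values (N, K, number of steps) are arbitrary.

import itertools, random

def make_nk_landscape(n, k, rng):
    """Random NK landscape: each locus i contributes a fitness value that depends
    on its own state and the states of k other randomly chosen loci."""
    neighbours = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
              for _ in range(n)]
    def fitness(genome):
        total = 0.0
        for i in range(n):
            key = (genome[i],) + tuple(genome[j] for j in neighbours[i])
            total += tables[i][key]
        return total / n
    return fitness

def hill_climb(fitness, n, steps, rng):
    """Simple one-bit-flip local search, a stand-in for a single agent's search strategy."""
    genome = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(genome)
    for _ in range(steps):
        i = rng.randrange(n)
        candidate = genome.copy()
        candidate[i] ^= 1
        f = fitness(candidate)
        if f >= best:
            genome, best = candidate, f
    return genome, best

rng = random.Random(42)
f = make_nk_landscape(n=12, k=3, rng=rng)
print(hill_climb(f, n=12, steps=200, rng=rng))

Higher K makes the contributions of the loci more interdependent, which is the usual way complexity of the decision-problem is varied in this kind of simulation; distributed decision-making would partition the N loci across agents and add a coordination mechanism on top.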


Author(s):  
Sahinya Susindar ◽  
Mahnoosh Sadeghi ◽  
Lea Huntington ◽  
Andrew Singer ◽  
Thomas K. Ferris

Classical methods for eliciting emotional responses, including the use of emotionally-charged pictures and films, have been used to study the influence of affective states on human decision-making and other cognitive processes. Advanced multisensory display systems, such as Virtual Reality (VR) headsets, offer a degree of immersion that may support more reliable elicitation of emotional experiences than less-immersive displays, and can provide a powerful yet relatively safe platform for inducing negative emotions such as fear and anger. However, it is not well understood how the presentation medium influences the degree to which emotions are elicited. In this study, emotionally-charged stimuli were introduced via two display configurations – on a desktop computer and on a VR system – and their effectiveness was evaluated based on performance in a decision task. Results show that the use of VR can be a more effective method for emotion elicitation when studying decision-making under the influence of emotions.


Author(s):  
Norman Warner ◽  
Michael Letsky ◽  
Michael Cowen

The purpose of this paper is to describe a cognitive model of team collaboration that emphasizes the human decision-making processes used during team collaboration. The descriptive model includes the domain characteristics, collaboration stages, meta- and macrocognitive processes, and the mechanisms for achieving the stages and cognitive processes. Two experiments were designed to provide empirical data on the validity of the collaboration stages and cognitive processes of the model. Both face-to-face and asynchronous, distributed teams demonstrated behavior that supports the existence of the collaboration stages along with seven cognitive processes.


2017 ◽  
Author(s):  
Erdem Pulcu ◽  
Masahiko Haruno

Interacting with others to decide how finite resources should be allocated between parties that may have competing interests is an important part of social life. Considering that not all of our proposals to others are accepted, the outcomes of such social interactions are, by their nature, probabilistic and risky. Here, we highlight cognitive processes related to value computations in human social interactions, based on mathematical modelling of proposer behavior in the Ultimatum Game. Our results suggest that the perception of risk is an overarching process across non-social and social decision-making, whereas nonlinear weighting of others' acceptance probabilities is unique to social interactions in which others' valuation processes need to be inferred. Despite the complexity of social decision-making, human participants make near-optimal decisions by dynamically adjusting their decision parameters to the changing social value orientation of their opponents, informed by the multidimensional inferences they make about those opponents (e.g. how prosocial they think their opponent is relative to themselves).
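One common way to formalize such a proposer model (a hedged sketch, not necessarily the authors' exact equations) combines a power utility over the proposer's share with a Prelec-style nonlinear weighting of the inferred acceptance probability. The responder model and all parameter values below are illustrative assumptions.

import numpy as np

def prelec_weight(p, gamma=0.7):
    """Prelec-style nonlinear probability weighting (assumed form)."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return np.exp(-(-np.log(p)) ** gamma)

def acceptance_prob(offer, opponent_threshold=0.3, slope=15.0):
    """Hypothetical responder model: acceptance rises smoothly around a fairness threshold."""
    return 1.0 / (1.0 + np.exp(-slope * (offer - opponent_threshold)))

def proposer_value(offer, alpha=0.9, gamma=0.7):
    """Subjective value of an offer: weighted acceptance probability times
    risk-attenuated utility of the proposer's share."""
    keep = 1.0 - offer
    return prelec_weight(acceptance_prob(offer), gamma) * keep ** alpha

offers = np.linspace(0.0, 1.0, 101)
best = offers[np.argmax([proposer_value(o) for o in offers])]
print(f"near-optimal offer under these assumed parameters: {best:.2f}")

Tracking a changing opponent would amount to updating the assumed opponent_threshold (their social value orientation) trial by trial from observed acceptances and rejections, and re-optimizing the offer accordingly.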


Author(s):  
Pascal D. König ◽  
Georg Wenzelburger

The promise of algorithmic decision-making (ADM) lies in its capacity to support or replace human decision-making based on a superior ability to solve specific cognitive tasks. Applications have found their way into various domains of decision-making—and even find appeal in the realm of politics. Against the backdrop of widespread dissatisfaction with politicians in established democracies, there are even calls for replacing politicians with machines. Our discipline has hitherto remained surprisingly silent on these issues. The present article argues that it is important to have a clear grasp of when and how ADM is compatible with political decision-making. While algorithms may help decision-makers in the evidence-based selection of policy instruments to achieve pre-defined goals, bringing ADM to the heart of politics, where the guiding goals are set, is dangerous. Democratic politics, we argue, involves a kind of learning that is incompatible with the learning and optimization performed by algorithmic systems.


2021 ◽  
Author(s):  
Payam Piray ◽  
Roshan Cools ◽  
Ivan Toni

Human decisions are known to be strongly influenced by the manner in which options are presented, the "framing effect". Here, we ask whether decision-makers are also influenced by how advice from other knowledgeable agents is framed, a "social framing effect". Concretely, do students learn better from a teacher who often frames advice by emphasizing appetitive outcomes, or do they learn better from another teacher who usually emphasizes avoiding options that can be harmful to their progress? We study the computational and neural mechanisms by which the framing of advice affects decision-making, social learning, and trust. We found that human participants are more likely to trust and follow an adviser who often uses an appetitive frame for advice compared with one who often uses an aversive frame. This social framing effect is implemented through a modulation of the integrative abilities of the ventromedial prefrontal cortex. At the time of choice, this region combines information learned via personal experiences of reward with social information, but the combination differs depending on the social framing of advice. Personally-acquired information is weighted more strongly when dealing with an adviser who uses an aversive frame. The findings suggest that social advice is systematically incorporated into our decisions, while being affected by biases similar to those influencing individual value-based learning.
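A toy version of such a combination rule (a sketch, not the fitted model from the study) weights personally learned value against the adviser's recommended value, with a lower advice weight for an adviser who frames aversively. The weights and example values below are illustrative assumptions.

def combined_value(q_personal, q_advice, adviser_frame, w_appetitive=0.6, w_aversive=0.35):
    """Toy combination of personally learned value and adviser-recommended value.
    The weight on advice is assumed to be lower for an aversive-framing adviser,
    mirroring the reported stronger reliance on personal experience in that case."""
    w = w_appetitive if adviser_frame == "appetitive" else w_aversive
    return (1.0 - w) * q_personal + w * q_advice

# Same personal estimate and same advice, but different adviser framing
print(combined_value(0.4, 0.8, "appetitive"))  # leans more on the advice
print(combined_value(0.4, 0.8, "aversive"))    # leans more on personal experience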


2020 ◽  
Author(s):  
Tsvetomira Dumbalska ◽  
Vickie Li ◽  
Konstantinos Tsetsos ◽  
Christopher Summerfield

Human decisions can be biased by irrelevant information. For example, choices between two preferred alternatives can be swayed by a third option that is inferior or unavailable. Previous work has identified three classic biases, known as the attraction, similarity and compromise effects, which arise during choices between economic alternatives defined by two attributes. However, the reliability, interrelationship, and computational origin of these three biases have been controversial. Here, a large cohort of human participants made incentive-compatible choices among assets that varied in price and quality. Instead of focusing on the three classic effects, we sampled decoy stimuli exhaustively across the bidimensional multi-attribute space and constructed a full map of decoy influence on choices between two otherwise preferred target items. Our analysis revealed that the decoy influence map was highly structured even beyond the three classic biases. We identified a very simple model that can fully reproduce the decoy influence map and capture its variability in individual participants. This model reveals that the three decoy effects are not distinct phenomena but are all special cases of a more general principle, by which attribute values are repulsed away from the context provided by rival options. The model helps explain why the biases are typically correlated across participants and allows us to validate a new prediction about their interrelationship. This work helps to clarify the origin of three of the most widely studied biases in human decision-making.
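A minimal sketch of the repulsion principle (not the authors' fitted model) shifts each option's attribute values away from the mean of its rivals before summing them, so a target whose attributes sit close to a decoy is devalued while a distant one is enhanced. The repulsion strength and the example attribute values are assumptions.

import numpy as np

def repulsed_values(options, beta=0.5):
    """Push each option's attribute values away from the context mean of its rivals,
    then sum the adjusted attributes into a single subjective value per option."""
    options = np.asarray(options, dtype=float)  # rows: options, cols: attributes
    values = []
    for i in range(len(options)):
        context = np.delete(options, i, axis=0).mean(axis=0)
        adjusted = options[i] + beta * (options[i] - context)  # repulsion from context
        values.append(adjusted.sum())
    return values

# Two targets (price utility, quality) and a decoy placed near target A
targets_plus_decoy = [[0.6, 0.5], [0.5, 0.6], [0.55, 0.45]]
print(repulsed_values(targets_plus_decoy))

Sweeping the decoy's coordinates across the attribute space and recording which target wins would trace out a decoy influence map of the kind described above.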


2021 ◽  
Vol 3 ◽  
pp. 27-46
Author(s):  
Sonja Utz ◽  
Lara Wolfers ◽  
Anja Göritz

In times of the COVID-19 pandemic, difficult decisions such as the distribution of ventilators must be made. For many of these decisions, humans could team up with algorithms; however, people often prefer human decision-makers. We examined the role of situational factors (morality of the scenario; perspective) and individual factors (need for leadership; conventionalism) in algorithm preference in a preregistered online experiment with German adults (n = 1,127). As expected, algorithm preference was lowest in the most morally laden scenario. The effect of perspective (i.e., decision-makers vs. decision targets) was only significant in the most morally laden scenario. Need for leadership predicted a stronger algorithm preference, whereas conventionalism was related to weaker algorithm preference. Exploratory analyses revealed that attitudes and knowledge also mattered, stressing the importance of individual factors.


2018 ◽  
Vol 3 (1) ◽  
pp. 1-12
Author(s):  
Thais Spiegel ◽  
Ana Carolina P V Silva

In the study of decision-making, the classical view of behavioral appropriateness or rationality was challenged on neurological and psychological grounds. The "bounded rationality" theory proposed that cognitive limitations lead decision-makers to construct simplified models for dealing with the world. Doctors' decisions, for example, are made under uncertain conditions, without knowing precisely whether a diagnosis is correct or whether a treatment will actually cure a patient, and often under time constraints. Using cognitive heuristics is neither good nor bad per se; heuristics are helpful when applied in situations to which they have been adapted. Therefore, this text contextualizes the human decision-making perspective to find descriptions that adhere more closely to the actual human decision-making process. Then, based on a literature review of cognition during decision-making, particularly in the healthcare context, it presents a model that identifies the roles of attention, categorization, memory, and emotion, and their inter-relations, during the decision-making process.

