human decision
Recently Published Documents

TOTAL DOCUMENTS: 1014 (five years: 382)
H-INDEX: 49 (five years: 9)

NeuroImage ◽  
2022 ◽  
Vol 246 ◽  
pp. 118780
Author(s):  
Vasiliki Liakoni ◽  
Marco P. Lehmann ◽  
Alireza Modirshanechi ◽  
Johanni Brea ◽  
Antoine Lutti ◽  
...  

Author(s):  
Sandra Notaro ◽  
Gianluca Grilli

Scientific evidence suggests that emotions affect actual human decision-making, particularly in highly emotional situations such as human-wildlife interactions. In this study we assess the role of fear in preferences for wildlife conservation, using a discrete choice experiment. The sample was split into two treatment groups and a control. In the treatment groups, the emotion of fear towards wildlife was manipulated using two different pictures of a wolf, one fearful and one reassuring, which were presented to respondents during the experiment. Results differed between the two treatments. The assurance treatment led to higher preferences and willingness to pay for the wolf, compared to the fear treatment and the control, for several population sizes. The impact of the fear treatment, on the other hand, was lower than expected and only significant for large populations of wolves, in excess of 50 specimens. Overall, the study suggests that emotional choices may represent a source of concern for the assessment of stable preferences. The impact of emotional choices is likely to be greater in situations where a wildlife-related topic is highly emphasized, positively or negatively, by social networks, mass media, and opinion leaders. When stated preferences towards wildlife are affected by the emotional state of fear due to contextual external stimuli, welfare analysis does not reflect stable individual preferences and may lead to sub-optimal conservation policies. Therefore, while more research is recommended for a more accurate assessment, it is advised to control the decision context during surveys for potential emotional choices.
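In discrete choice experiments like the one above, willingness to pay (WTP) for an attribute is commonly estimated as the negative ratio of the attribute coefficient to the cost coefficient from a conditional logit model. A minimal sketch with hypothetical coefficients (not figures from the study):

```python
# Hypothetical coefficients from a fitted conditional logit model.
# WTP for an attribute = -(attribute coefficient / cost coefficient).
beta_wolf_population = 0.8    # utility weight of a larger wolf population
beta_cost = -0.02             # utility weight of the cost attribute

wtp = -beta_wolf_population / beta_cost
print(round(wtp, 2))          # 40.0 currency units per unit of the attribute
```

The sign convention matters: the cost coefficient is negative (higher cost lowers utility), so the ratio is negated to express WTP as a positive amount.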


2022 ◽  
Vol 4 ◽  
Author(s):  
David Orrell ◽  
Monireh Houshmand

This paper describes an approach to economics that is inspired by quantum computing, and is motivated by the need to develop a consistent quantum mathematical framework for economics. The traditional neoclassical approach assumes that rational utility-optimisers drive market prices to a stable equilibrium, subject to external perturbations or market failures. While this approach has been highly influential, it has come under increasing criticism following the financial crisis of 2007/8. The quantum approach, in contrast, is inherently probabilistic and dynamic. Decision-makers are described, not by a utility function, but by a propensity function which specifies the probability of transacting. We show how a number of cognitive phenomena such as preference reversal and the disjunction effect can be modelled by using a simple quantum circuit to generate an appropriate propensity function. Conversely, a general propensity function can be quantized, via an entropic force, to incorporate effects such as interference and entanglement that characterise human decision-making. Applications to some common problems and topics in economics and finance, including the use of quantum artificial intelligence, are discussed.
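The propensity-function idea can be illustrated with a one-qubit toy model (my own sketch, not a circuit from the paper): the decision state is a unit vector, a rotation gate models deliberation, and the propensity to transact is the squared amplitude of the "buy" basis state.

```python
# Illustrative one-qubit sketch of a propensity function.
# The angle theta is a free parameter of this toy model, not a
# quantity defined in the paper.
import numpy as np

def rotation(theta):
    """2x2 rotation gate acting on the decision state."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

ket_hold = np.array([1.0, 0.0])          # basis state |hold>
state = rotation(np.pi / 3) @ ket_hold   # apply a "deliberation" gate
propensity = state[1] ** 2               # probability of measuring |buy>
print(round(propensity, 4))              # sin^2(pi/3) = 0.75
```

Because probabilities come from squared amplitudes, composing such gates produces interference terms, which is what lets this framework reproduce effects like the disjunction effect that a classical probability mixture cannot.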


2022 ◽  
Author(s):  
Shaozhe Cheng ◽  
Ning Tang ◽  
Yang Zhao ◽  
Jifan Zhou ◽  
Mowei Shen ◽  
...  

It is an ancient insight that human actions are driven by desires. This insight inspired the formulation that a rational agent acts to maximize expected utility (MEU), which has been widely used in psychology for modeling theory of mind and in artificial intelligence (AI) for controlling machines' actions. Yet it is unclear how humans act coherently when their desires are complex and often in conflict with one another. Here we show that desires do not directly control human actions. Instead, actions are regulated by an intention, a deliberate mental state that commits to a fixed future rather than weighing the expected utilities of many futures evaluated by many desires. Our study reveals four behavioral signatures of human intention by demonstrating how human sequential decision-making deviates from the optimal MEU policy in a navigation task: "Disruption resistance", the persistent pursuit of an original intention even after an unexpected change has made that intention suboptimal; "Ulysses-constraint of freedom", the proactive constraint of one's own freedom by avoiding a path that could lead to many futures, much as Ulysses bound himself to resist the temptation of the Sirens' song; "Enhanced legibility", the active demonstration of intention by choosing a path whose destination a third-party observer can promptly infer; and "Temporal leap", commitment to a distant future even before the proximal one is reached. Our results show how the philosophy of intention can guide discoveries about human decision-making that can also be empirically compared with AI algorithms. The findings suggest that a theory of mind should include intention as a distinctive mental state between desires and actions, one that quarantines conflicting desires from the execution of actions.
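The MEU baseline that the study contrasts against is simple to state: the agent picks the action whose probability-weighted sum of outcome utilities is highest. A minimal sketch with hypothetical actions, outcomes, and numbers (not the authors' navigation task):

```python
# Toy MEU decision rule with two conflicting desires: outcome "A" is
# valued more than "B", but the path toward A is riskier. All names
# and numbers here are hypothetical.

def expected_utility(action, outcome_probs, utility):
    """Probability-weighted sum of outcome utilities for one action."""
    return sum(p * utility[o] for o, p in outcome_probs[action].items())

def meu_action(actions, outcome_probs, utility):
    """MEU rule: choose the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

utility = {"A": 10.0, "B": 6.0, "miss": 0.0}
outcome_probs = {
    "left":  {"A": 0.5, "miss": 0.5},   # risky path toward the better goal
    "right": {"B": 0.9, "miss": 0.1},   # safe path toward the lesser goal
}
print(meu_action(["left", "right"], outcome_probs, utility))  # "right": EU 5.4 beats 5.0
```

An intention, in the paper's sense, would instead commit to one goal and stick with it, which is exactly why behavior such as "disruption resistance" deviates from this re-evaluate-at-every-step rule.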


2022 ◽  
pp. 231-246
Author(s):  
Swati Bansal ◽  
Monica Agarwal ◽  
Deepak Bansal ◽  
Santhi Narayanan

Artificial intelligence is already present in all facets of work life. Its integration into human resources is a necessary process with far-reaching benefits. It has its challenges, but to survive in the current Industry 4.0 environment and prepare for the future Industry 5.0, organisations must embed AI in their HR systems. AI can benefit all HR functions, from talent acquisition through onboarding to off-boarding. Its importance only grows given the needs and career aspirations of Generations Y and Z entering the workforce. Although employees have apprehensions about privacy and job loss, AI, implemented effectively, is the present and the future. AI will not make people lose their jobs; rather, it will require HR professionals to upgrade their skills and spend their time in more strategic roles. In the end, it is HR who will make the final decisions based on the information they get from AI tools. The right mix of human decision-making skills and AI will give organisations the right direction in which to move forward.


Queue ◽  
2021 ◽  
Vol 19 (6) ◽  
pp. 28-56
Author(s):  
Valerie Chen ◽  
Jeffrey Li ◽  
Joon Sik Kim ◽  
Gregory Plumb ◽  
Ameet Talwalkar

The emergence of machine learning as a society-changing technology in the past decade has triggered concerns about people's inability to understand the reasoning of increasingly complex models. The field of IML (interpretable machine learning) grew out of these concerns, with the goal of empowering various stakeholders to tackle use cases, such as building trust in models, performing model debugging, and generally informing real human decision-making.


2021 ◽  
Vol 13 (3) ◽  
Author(s):  
Roger Morbey ◽  
Gillian Smith ◽  
Isabel Oliver ◽  
Obaghe Edeghere ◽  
Iain Lake ◽  
...  

Surveillance systems need to be evaluated to understand what the system can or cannot detect. The measures commonly used to quantify detection capabilities are sensitivity, positive predictive value and timeliness. However, the practical application of these measures to multi-purpose syndromic surveillance services is complex. Specifically, it is very difficult to link definitive lists of what the service is intended to detect and what was detected. First, we discuss issues arising from a multi-purpose system, which is designed to detect a wide range of health threats, and where individual indicators, e.g. ‘fever’, are also multi-purpose. Secondly, we discuss different methods of defining what can be detected, including historical events and simulations. Finally, we consider the additional complexity of evaluating a service which incorporates human decision-making alongside an automated detection algorithm. Understanding the complexities involved in evaluating multi-purpose systems helps design appropriate methods to describe their detection capabilities.
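The detection measures named above are straightforward to compute once true positives, false positives, and false negatives can be counted; the evaluation difficulty the abstract describes lies in producing those counts for a multi-purpose system. A minimal sketch with hypothetical counts:

```python
# Standard detection measures for a surveillance system, given counts
# of true positives (TP), false positives (FP) and false negatives (FN).
# The counts below are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of real events the system detected: TP / (TP + FN)."""
    return tp / (tp + fn)

def positive_predictive_value(tp: int, fp: int) -> float:
    """Fraction of alarms that were real events: TP / (TP + FP)."""
    return tp / (tp + fp)

# Example: 40 real events, 32 detected; the system also raised 8 false alarms.
tp, fn, fp = 32, 8, 8
print(sensitivity(tp, fn))                 # 0.8
print(positive_predictive_value(tp, fp))   # 0.8
```

Timeliness, the third measure mentioned, has no single standard formula; it is usually reported as the delay between event onset and first detection, which again presupposes an agreed list of events and detections.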


2021 ◽  
pp. 002224372110700
Author(s):  
Gizem Yalcin ◽  
Sarah Lim ◽  
Stefano Puntoni ◽  
Stijn M. J. van Osselaer

Although companies increasingly are adopting algorithms for consumer-facing tasks (e.g., application evaluations), little research has compared consumers’ reactions to favorable decisions (e.g., acceptances) versus unfavorable decisions (e.g., rejections) about themselves that are made by an algorithm versus a human. Ten studies reveal that, in contrast to managers’ predictions, consumers react less positively when a favorable decision is made by an algorithmic (vs. a human) decision maker, whereas this difference is mitigated for an unfavorable decision. The effect is driven by distinct attribution processes: It is easier for consumers to internalize a favorable decision outcome that is rendered by a human (vs. an algorithm), while it is easy to externalize an unfavorable decision outcome regardless of the decision maker type. The authors conclude by advising managers on how to limit the likelihood of less positive reactions toward algorithmic (vs. human) acceptances.

