Concepts and tools of artificial intelligence for human decision making

1988 ◽  
Vol 68 (1-3) ◽  
pp. 217-236 ◽  
Author(s):  
Anna Vari ◽  
Janos Vecsenyi


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.
Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.
Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely, analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.
Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.
Practical implications: Decisions are building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries/overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.
Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.
Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


Author(s):  
Chris Reed

Using artificial intelligence (AI) technology to replace human decision-making will inevitably create new risks whose consequences are unforeseeable. This naturally leads to calls for regulation, but I argue that it is too early to attempt a general system of AI regulation. Instead, we should work incrementally within the existing legal and regulatory schemes which allocate responsibility, and therefore liability, to persons. Where AI clearly creates risks which current law and regulation cannot deal with adequately, then new regulation will be needed. But in most cases, the current system can work effectively if the producers of AI technology can provide sufficient transparency in explaining how AI decisions are made. Transparency ex post can often be achieved through retrospective analysis of the technology's operations, and will be sufficient if the main goal is to compensate victims of incorrect decisions. Ex ante transparency is more challenging, and can limit the use of some AI technologies such as neural networks. It should only be demanded by regulation where the AI presents risks to fundamental rights, or where society needs reassuring that the technology can safely be used. Masterly inactivity in regulation is likely to achieve a better long-term solution than a rush to regulate in ignorance. This article is part of a discussion meeting issue ‘The growing ubiquity of algorithms in society: implications, impacts and innovations'.


2022 ◽  
pp. 231-246
Author(s):  
Swati Bansal ◽  
Monica Agarwal ◽  
Deepak Bansal ◽  
Santhi Narayanan

Artificial intelligence is already here in all facets of work life. Its integration into human resources is a necessary process with far-reaching benefits. It has its challenges, but to survive in the current Industry 4.0 environment and prepare for the future Industry 5.0, organisations must embed AI in their HR systems. AI can benefit every HR function, from talent acquisition and onboarding through to off-boarding. Its importance grows further given the needs and career aspirations of Generations Y and Z entering the workforce. Though employees have apprehensions about privacy and loss of jobs, AI, if implemented effectively, is the present and the future. AI will not make people lose jobs; instead, it will require HR professionals to upgrade their skills and spend their time in more strategic roles. In the end, it is HR who will make the final decisions based on the information they get from AI tools. A proper mix of human decision-making skills and AI would give organisations the right direction to move forward.


Author(s):  
M.P.L. Perera

Adaptive e-learning aims to close the gap between the pupil and the educator by addressing the needs and skills of individual learners. Artificial intelligence strategies that can simulate human decision-making processes are therefore central to adaptive e-learning. This paper explores four artificial intelligence techniques: fuzzy logic, neural networks, Bayesian networks and genetic algorithms, highlighting their contributions to adaptability in adaptive e-learning. The use of artificial neural networks to resolve problems in current adaptive e-learning frameworks has been established.
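As a toy sketch (an assumption for illustration, not the paper's implementation), a small artificial neural network could drive adaptivity by predicting, from hypothetical learner features, whether the next item should be easier, similar, or harder:

```python
# Minimal sketch: an MLP that maps recent learner behaviour to a next-item
# difficulty decision. Features and labels are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical features: [recent accuracy, mean response time (s), hints used]
X = np.array([
    [0.95, 12.0, 0],
    [0.40, 55.0, 3],
    [0.70, 30.0, 1],
    [0.85, 20.0, 0],
])
y = np.array(["harder", "easier", "similar", "harder"])  # next-item difficulty

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print(model.predict([[0.9, 15.0, 0]]))  # e.g. ['harder']
```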


2020 ◽  
pp. 78-106
Author(s):  
George A. Khachatryan

This chapter describes the core ideas behind instruction modeling. A promising way to improve mathematics instruction is to import successful approaches from other countries; however, it is exceptionally difficult to do this, since instructional traditions are cultural and the volume of teaching expertise that needs to be transferred is vast. Computers offer a possible way to ease the barriers. Expert systems (invented c. 1970) are a type of artificial intelligence system that uses rules to mimic human decision-making. Following the pattern suggested by expert systems, an instruction modeler studies high-quality offline instruction and then designs computer programs that aim to recreate this instruction. Many important activities cannot be automated, and therefore instruction modeling is necessarily blended learning: some instruction takes place online, while other activities are led by classroom teachers. To illustrate these ideas, this chapter describes several instruction modeling programs created by Reasoning Mind. It also discusses Russian mathematics education, explaining why it is a successful instructional tradition and a suitable choice for instruction modeling.
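To make the expert-system pattern concrete, here is a minimal forward-chaining sketch of rule-based decision-making; the facts and tutoring rules are invented for illustration and are not drawn from Reasoning Mind's programs:

```python
# Minimal forward-chaining rule engine: fire rules until no new facts
# (here, tutoring actions) can be derived. Rules and facts are hypothetical.
RULES = [
    ({"answer_incorrect", "first_attempt"}, "offer_hint"),
    ({"answer_incorrect", "second_attempt"}, "show_worked_example"),
    ({"answer_correct", "fast_response"}, "advance_to_harder_problem"),
]

def forward_chain(facts):
    """Apply rules repeatedly, adding consequents whose antecedents all hold."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in RULES:
            if antecedents <= facts and consequent not in facts:
                facts.add(consequent)
                changed = True
    return facts

print(forward_chain({"answer_incorrect", "first_attempt"}))
# {'answer_incorrect', 'first_attempt', 'offer_hint'}
```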


2021 ◽  
Vol 3 (3) ◽  
pp. 740-770
Author(s):  
Samanta Knapič ◽  
Avleen Malhi ◽  
Rohit Saluja ◽  
Kary Främling

In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explainable methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastral images obtained by a video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, called Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and stated their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20) with three distinct forms of explanations were quantitatively analyzed. We found that, as hypothesized, the CIU-explainable method performed better than both LIME and SHAP methods in terms of improving support for human decision-making and being more transparent and thus understandable to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between various explanation support settings. In line with that, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
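As an illustration of the kind of post hoc explanation described above, the following sketch applies LIME to a CNN image prediction; the model file, input frame, and preprocessing are hypothetical stand-ins, not the authors' setup:

```python
# Minimal sketch: produce a LIME explanation for one CNN prediction,
# highlighting the image regions that support the predicted class.
import numpy as np
from tensorflow import keras
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = keras.models.load_model("vce_cnn.h5")   # hypothetical trained CNN

def predict_fn(images):
    # LIME passes batches of perturbed images; return class probabilities.
    return model.predict(np.asarray(images), verbose=0)

image = np.load("vce_frame.npy")                # hypothetical preprocessed frame

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image.astype("double"), predict_fn, top_labels=2, num_samples=1000
)
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp / 255.0, mask)   # regions supporting the prediction
```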


Author(s):  
Hesham Fouad ◽  
Ira S. Moskowitz ◽  
Derek Brock ◽  
Michael Scott
