Argumentation and explainable artificial intelligence: a survey

2021 ◽  
Vol 36 ◽  
Author(s):  
Alexandros Vassiliades ◽  
Nick Bassiliades ◽  
Theodore Patkos

Abstract Argumentation and eXplainable Artificial Intelligence (XAI) are closely related, as in recent years Argumentation has been used to provide Explainability to AI. Argumentation can show step by step how an AI system reaches a decision; it can reason over uncertainty and can find solutions when conflicting information is encountered. In this survey, we elaborate on the combined topics of Argumentation and XAI, reviewing all the important methods and studies, as well as implementations that use Argumentation to provide Explainability in AI. More specifically, we show how Argumentation can enable Explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues. Subsequently, we elaborate on how Argumentation can help in constructing explainable systems in various application domains, such as Medical Informatics, Law, the Semantic Web, Security, Robotics, and some general-purpose systems. Finally, we present approaches that combine Machine Learning and Argumentation Theory, toward more interpretable predictive models.
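The survey's claim that Argumentation can trace a decision step by step and handle conflicting information can be illustrated with a minimal sketch (not taken from the survey itself) of a Dung-style abstract argumentation framework, where the grounded extension collects exactly the arguments that can be defended:

```python
# Minimal Dung-style abstract argumentation framework (illustrative sketch):
# arguments attack one another, and the grounded extension is built up
# step by step, showing why each accepted argument survives.

def grounded_extension(arguments, attacks):
    """Iterate the characteristic function to its least fixpoint."""
    def acceptable(arg, defended_set):
        # arg is acceptable if every attacker is counter-attacked
        # by some argument already in defended_set.
        attackers = {a for (a, b) in attacks if b == arg}
        return all(any((d, a) in attacks for d in defended_set)
                   for a in attackers)

    extension = set()
    while True:
        new = {a for a in arguments if acceptable(a, extension)}
        if new == extension:
            return extension
        extension = new

# a attacks b, b attacks c: a is unattacked, so a is in;
# a defeats b, which in turn defends c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

The fixpoint iteration mirrors the step-by-step reasoning the survey describes: each pass admits only arguments whose attackers are already defeated, so the final extension carries its own justification.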

2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to trace its origins to Kurt Gödel's unprovable computational statements of 1931, is now called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still necessarily reflecting human biases in models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots efficiently repeat the same labor-intensive procedures in factories and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision-making, automated intelligence for labor robots, and assisted intelligence for data analysis.


2021 ◽  
Vol 22 (6) ◽  
pp. 626-634
Author(s):  
Saskya Byerly ◽  
Lydia R. Maurer ◽  
Alejandro Mantero ◽  
Leon Naar ◽  
Gary An ◽  
...  

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Pooya Tabesh

Purpose: While it is evident that the introduction of machine learning and the availability of big data have revolutionized various organizational operations and processes, existing academic and practitioner research within the decision process literature has mostly ignored the nuances of these influences on human decision-making. Building on existing research in this area, this paper aims to define these concepts from a decision-making perspective and elaborates on the influences of these emerging technologies on human analytical and intuitive decision-making processes.
Design/methodology/approach: The authors first provide a holistic understanding of important drivers of digital transformation. The authors then conceptualize the impact that analytics tools built on artificial intelligence (AI) and big data have on intuitive and analytical human decision processes in organizations.
Findings: The authors discuss similarities and differences between machine learning and two human decision processes, namely analysis and intuition. While it is difficult to jump to any conclusions about the future of machine learning, human decision-makers seem likely to continue to monopolize the majority of intuitive decision tasks, which will help them keep the upper hand (vis-à-vis machines), at least in the near future.
Research limitations/implications: The work contributes to research on rational (analytical) and intuitive processes of decision-making at the individual, group and organization levels by theorizing about the way these processes are influenced by advanced AI algorithms such as machine learning.
Practical implications: Decisions are the building blocks of organizational success. Therefore, a better understanding of the way human decision processes can be impacted by advanced technologies will prepare managers to better use these technologies and make better decisions. By clarifying the boundaries and overlaps among concepts such as AI, machine learning and big data, the authors contribute to their successful adoption by business practitioners.
Social implications: The work suggests that human decision-makers will not be replaced by machines if they continue to invest in what they do best: critical thinking, intuitive analysis and creative problem-solving.
Originality/value: The work elaborates on important drivers of digital transformation from a decision-making perspective and discusses their practical implications for managers.


2021 ◽  
Author(s):  
J. Eric T. Taylor ◽  
Graham Taylor

Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this paper, we describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We ought to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability. We provide a review of XAI for psychologists, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. We also provide a framework for research in XAI, highlight exemplary cases of experimentation within XAI inspired by psychological science, and provide a tutorial on experimenting with machines. We end by noting the advantages of an experimental approach and invite other psychologists to conduct research in this exciting new field.


Author(s):  
Shivangi Ruhela ◽  
Pragati Chaudhary ◽  
Rishija Shrivas ◽  
Deepti Chopra

Artificial Intelligence (AI) and the Internet of Things (IoT) are popular domains in Computer Science. AIoT converges AI and IoT, applying AI within IoT. When 'things' are programmed and connected to the Internet, IoT comes into place; when these IoT systems can analyze data and make decisions without human intervention, AIoT is achieved. AI powers IoT through decision-making and machine learning, while IoT powers AI through data exchange and connectivity. With AI as the brain and IoT as the body, such systems gain greatly improved efficiency and performance and can learn from user interactions. Some studies suggest that, by 2022, AIoT devices such as drones that protect rainforests or fully automated cars would dominate the computing industry. The paper discusses AIoT in greater depth, focuses on a few case studies of AIoT for better understanding at the practical level, and lastly proposes an idea for a model that suggests food through emotion analysis.


2021 ◽  
Author(s):  
Yew Kee Wong

Deep learning is a type of machine learning that trains a computer to perform human-like tasks, such as recognizing speech, identifying images or making predictions. Instead of organizing data to run through predefined equations, deep learning sets up basic parameters about the data and trains the computer to learn on its own by recognizing patterns using many layers of processing. This paper aims to illustrate some of the different deep learning algorithms and methods that can be applied to artificial intelligence analysis, as well as the opportunities their application provides in various decision-making domains.
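The layered, pattern-recognizing learning described above can be sketched as a tiny two-layer network trained on XOR with plain NumPy (an illustrative example, not from the paper):

```python
# Illustrative sketch: a two-layer network learns the XOR pattern from data
# alone, each layer transforming the previous layer's output rather than
# evaluating a predefined equation.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer parameters
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer parameters
sigmoid = lambda z: 1 / (1 + np.exp(-z))

out0 = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)  # untrained predictions

for _ in range(5000):                            # plain gradient descent
    h = sigmoid(X @ W1 + b1)                     # layer 1: extract features
    out = sigmoid(h @ W2 + b2)                   # layer 2: combine them
    g_out = (out - y) * out * (1 - out)          # backpropagate the error
    g_h = g_out @ W2.T * h * (1 - h)
    W2 -= 0.5 * h.T @ g_out; b2 -= 0.5 * g_out.sum(0)
    W1 -= 0.5 * X.T @ g_h;   b1 -= 0.5 * g_h.sum(0)

print(np.round(out.ravel(), 2))  # predictions move toward [0, 1, 1, 0]
```

No equation for XOR is ever written down; the layers discover the pattern from the four examples, which is the contrast with predefined equations that the abstract draws.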


Author(s):  
Viktor Elliot ◽  
Mari Paananen ◽  
Miroslaw Staron

We propose an exercise intended to provide a basic understanding of key concepts within AI and to extend the understanding of AI beyond mathematics. The exercise allows participants to carry out analyses of accounting data using visualization tools and to develop their own machine learning algorithms that can mimic their decisions. Finally, we also problematize the use of AI in decision-making, considering aspects such as biases in data and ethical concerns.
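One way such a decision-mimicking exercise might look in practice (a hypothetical sketch; the data, feature names, and library choice are assumptions, not taken from the paper) is to fit a small decision tree to a participant's approve/reject calls and inspect the rules it learned:

```python
# Hypothetical sketch: a shallow decision tree mimics a participant's
# approve/reject decisions on toy accounting ratios, and export_text
# prints the human-readable rules it inferred.
from sklearn.tree import DecisionTreeClassifier, export_text

# toy features: [current_ratio, debt_to_equity]; labels: participant's calls
X = [[2.1, 0.4], [1.8, 0.6], [0.9, 1.5], [0.7, 2.0], [1.2, 1.1], [2.5, 0.3]]
y = ["approve", "approve", "reject", "reject", "reject", "approve"]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["current_ratio", "debt_to_equity"]))
print(tree.predict([[2.0, 0.5]])[0])  # a new case classified by the mimic
```

Printing the learned rules is what opens the discussion the exercise aims at: participants can see which cues the model picked up from their decisions and where data biases might creep in.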


Author(s):  
Adrian Zuckerman

Computer-operated systems are increasingly used for decision-making in public administration and private enterprise. Activities once reserved to humans because they required decision-making in varied and unpredictable circumstances may now be performed by artificial intelligence (AI). Machine learning is developing at such a pace that it is conceivable that algorithm-operated systems may be able to provide litigation services and even adjudication. Supplanting lawyers and judges with AI would have serious implications beyond the loss of jobs. AI lawyers and AI judges would change the adversarial system beyond recognition by reducing adjudication to a single machine operation, putting an end to the visibility of the court process, and eliminating the physical presence of the court. Court legitimacy would be undermined because AI adjudication would not be able to reflect human psychology: emotions, aspirations, beliefs, or moral sensibility.

