Intelligent Decision Assistance Versus Automated Decision-Making: Enhancing Knowledge Workers Through Explainable Artificial Intelligence

2022 ◽  
Author(s):  
Max Schemmer ◽  
Niklas Kühl ◽  
Gerhard Satzger


Author(s):  
Wael Mohammad Alenazy

The integration of the internet of things, artificial intelligence, and blockchain enables the monitoring of structural health by unattended, automated means. Remote monitoring demands an intelligent, automated decision-making capability that is still absent from present solutions. The solution proposed in this chapter contemplates an architecture of smart sensors, customized for individual structures, that monitors structural health through stress, strain, and the looseness of bolted joints. Long-range sensors are deployed to transmit messages over greater distances than existing techniques allow. In the simulated results, the sensors record monitoring information (pressure points, temperature, and pre-tension force) and transmit it to the blockchain platform, where the architecture assesses the criticality of each transaction. The blockchain platform is also responsible for the decentralized storage and accessibility of the information, as well as for automation and security.
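As a rough illustration of the kind of pipeline the chapter describes, the sketch below packages a sensor reading (pressure, temperature, pre-tension force) into a hash-chained record and flags critical transactions. The class, field names, threshold, and in-memory ledger are assumptions made for illustration and are not details taken from the chapter.

```python
# Hypothetical sketch: hash-chained storage of structural-health readings.
# All names and thresholds are illustrative, not from the source chapter.
import hashlib
import json
import time

class SensorLedger:
    def __init__(self):
        self.blocks = []  # in-memory stand-in for a blockchain platform

    def append_reading(self, sensor_id, pressure_kpa, temperature_c, pretension_kn):
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {
            "sensor_id": sensor_id,
            "pressure_kpa": pressure_kpa,
            "temperature_c": temperature_c,
            "pretension_kn": pretension_kn,
            "timestamp": time.time(),
            # flag readings that breach an assumed bolt-looseness threshold
            "critical": pretension_kn < 10.0,
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append(record)
        return record

ledger = SensorLedger()
print(ledger.append_reading("joint-07", pressure_kpa=101.3, temperature_c=24.5, pretension_kn=8.2))
```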


AI & Society ◽  
2020 ◽  
Vol 35 (3) ◽  
pp. 611-623 ◽  
Author(s):  
Theo Araujo ◽  
Natali Helberger ◽  
Sanne Kruikemeier ◽  
Claes H. de Vreese

Robotica ◽  
1987 ◽  
Vol 5 (2) ◽  
pp. 99-110 ◽  
Author(s):  
Igor Aleksander

SUMMARY
This paper describes the principles of the advanced programming techniques, often dubbed Artificial Intelligence, involved in decision making that may be of value in matters related to production engineering. Automated decision making in the context of production can take many forms. At the most obvious level, a robot may have to plan a sequence of actions on the basis of signals obtained from changing conditions in its environment. These signals may, indeed, be quite complex, for example the input of visual information from a television camera.

At another level, automated planning may be required to schedule the entire work cycle of a plant that includes many robots as well as other types of automated machinery. The often-quoted dark factory is an example of this, where not only are some of the operations (such as welding) done by robots, but the transport of part-completed assemblies is also automatically scheduled as a set of actions for autonomic transporters and cranes. It is common practice for this activity to be preprogrammed in the greatest detail. Automated decision making aims to add flexibility to the process, absolving the system designer from having to foresee every eventuality at the design stage.

Frequent reference is made in this context to artificial intelligence (AI), knowledge-based systems, and expert systems. Although these topics are more readily associated with computer science, it is the automated factory in general, and the robot in particular, that will benefit from success in these fields. In this part of the paper we try to sharpen up this perspective, while in part II we discuss the history of artificial intelligence in this context. In part III we discuss the industrial prospects for the field.
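As a toy illustration of the rule-based, expert-system style of automated decision making the paper alludes to, the sketch below maps sensed plant conditions to actions. The signals, rules, and actions are invented for illustration and are not drawn from the paper.

```python
# Minimal production-rule sketch: a controller that maps sensed conditions
# to actions. Rules and signal names are hypothetical.
RULES = [
    (lambda s: s["part_present"] and s["weld_temp_c"] < 200, "start_weld"),
    (lambda s: s["weld_temp_c"] >= 200,                      "pause_and_cool"),
    (lambda s: not s["part_present"],                        "request_transporter"),
]

def decide(signals):
    """Return the action of the first rule whose condition matches the signals."""
    for condition, action in RULES:
        if condition(signals):
            return action
    return "idle"

print(decide({"part_present": True, "weld_temp_c": 150}))   # start_weld
print(decide({"part_present": False, "weld_temp_c": 150}))  # request_transporter
```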


2015 ◽  
Vol 773-774 ◽  
pp. 154-157 ◽  
Author(s):  
Muhammad Firdaus Rosli ◽  
Lim Meng Hee ◽  
M. Salman Leong

Machines are the heart of most industries. By ensuring the health of machines, one can increase company revenue and eliminate safety threats related to catastrophic machinery failures. In condition monitoring (CM), the question often arises at decision-making time whether the machine is still safe to run. The traditional CM approach depends heavily on human interpretation of results, whereby decisions are made solely on the basis of an individual's experience and knowledge of the machines. The advent of artificial intelligence (AI) and automated approaches to decision making in CM provides a more objective and unbiased alternative and has become a topic of interest in recent years. This paper reviews the techniques used for automated decision making in CM, with emphasis on Dempster-Shafer (D-S) evidence theory and other basic probability assignment (BPA) techniques such as the support vector machine (SVM).
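For context, the sketch below shows Dempster's rule of combination, the core D-S step such reviews build on: two basic probability assignments (here, hypothetically one derived from an SVM output and one from a temperature sensor) are fused into a single belief about machine health. The frame of discernment and the mass values are illustrative assumptions.

```python
# Dempster's rule of combination over a two-hypothesis frame {safe, faulty}.
# Mass values are invented for illustration.
from itertools import product

def combine(m1, m2):
    """Combine two BPAs whose focal elements are frozensets of hypotheses."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory evidence
    # Normalize by the non-conflicting mass (1 - K)
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

SAFE, FAULTY = frozenset({"safe"}), frozenset({"faulty"})
EITHER = SAFE | FAULTY  # ignorance

m_svm  = {FAULTY: 0.6, SAFE: 0.1, EITHER: 0.3}   # e.g. from an SVM classifier
m_temp = {FAULTY: 0.5, SAFE: 0.2, EITHER: 0.3}   # e.g. from a temperature reading
print(combine(m_svm, m_temp))  # fused belief about machine health
```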


2021 ◽  
Vol 36 ◽  
Author(s):  
Alexandros Vassiliades ◽  
Nick Bassiliades ◽  
Theodore Patkos

Argumentation and eXplainable Artificial Intelligence (XAI) are closely related, as in recent years Argumentation has been used to provide explainability to AI. Argumentation can show, step by step, how an AI system reaches a decision; it can provide reasoning over uncertainty and can find solutions when conflicting information is faced. In this survey, we elaborate on the topics of Argumentation and XAI combined, reviewing the important methods, studies, and implementations that use Argumentation to provide explainability in AI. More specifically, we show how Argumentation can enable explainability for solving various types of problems in decision making, justification of an opinion, and dialogues. Subsequently, we elaborate on how Argumentation can help in constructing explainable systems in various application domains, such as Medical Informatics, Law, the Semantic Web, Security, Robotics, and some general-purpose systems. Finally, we present approaches that combine Machine Learning and Argumentation Theory, toward more interpretable predictive models.
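As a minimal example of the step-by-step acceptability reasoning described above, the sketch below computes the grounded extension of a tiny abstract argumentation framework by iterating the characteristic function. The arguments and attack relation are invented for illustration and are not taken from the survey.

```python
# Grounded extension of an abstract argumentation framework:
# iterate the characteristic function from the empty set to its least fixed point.
def grounded_extension(arguments, attacks):
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    accepted = set()
    while True:
        # An argument is acceptable if every attacker is counter-attacked
        # by some already-accepted argument.
        new = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted) for b in attackers[a])
        }
        if new == accepted:
            return accepted
        accepted = new

args = {"treat", "side_effect", "counter_evidence"}
atts = {("side_effect", "treat"), ("counter_evidence", "side_effect")}
print(grounded_extension(args, atts))  # {'counter_evidence', 'treat'}
```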


2021 ◽  
Author(s):  
Nicolas Scharowski ◽  
Florian Brühlmann

In explainable artificial intelligence (XAI) research, explainability is widely regarded as crucial for user trust in artificial intelligence (AI). However, empirical investigations of this assumption are still lacking. There are several proposals as to how explainability might be achieved, and what ramifications explanations actually have for humans remains an ongoing debate. In our work in progress, we explored two post-hoc explanation approaches presented in natural language as a means of explainable AI. We examined the effects of human-centered explanations on trust behavior in a financial decision-making experiment (N = 387), captured by weight of advice (WOA). Results showed that AI explanations led to higher trust behavior when participants were advised to decrease an initial price estimate, but had no effect when the AI recommended increasing it. We argue that these differences in trust behavior may be caused by cognitive biases and heuristics that people apply in their decision-making processes involving AI. So far, XAI has primarily focused on biased data and on prejudice arising from incorrect assumptions in the machine learning process. The implications of the biases and heuristics that humans exhibit when presented with an explanation by an AI have received little attention in the current XAI debate. Both researchers and practitioners need to be aware of such human biases and heuristics in order to develop truly human-centered AI.
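For reference, weight of advice is conventionally computed as the shift from the initial to the final estimate relative to the distance between the initial estimate and the advice; the exact operationalization in this study may differ, and the example values below are invented.

```python
# Standard weight-of-advice measure from the judge-advisor literature.
def weight_of_advice(initial_estimate, final_estimate, ai_advice):
    """WOA = (final - initial) / (advice - initial); 0 = advice ignored, 1 = fully adopted."""
    if ai_advice == initial_estimate:
        return float("nan")  # undefined when the AI agrees with the initial estimate
    return (final_estimate - initial_estimate) / (ai_advice - initial_estimate)

# Hypothetical trial: initial estimate 300k, AI recommends decreasing to 250k,
# participant revises to 270k -> WOA = 0.6
print(weight_of_advice(300_000, 270_000, 250_000))
```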


2021 ◽  
Vol 3 (3) ◽  
pp. 740-770
Author(s):  
Samanta Knapič ◽  
Avleen Malhi ◽  
Rohit Saluja ◽  
Kary Främling

In this paper, we present the potential of Explainable Artificial Intelligence methods for decision support in medical image analysis scenarios. Using three types of explanation methods applied to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by the Convolutional Neural Network (CNN). In vivo gastric images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals' trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation. We conducted three user studies based on the explanations provided by LIME, SHAP, and CIU. Users from different non-medical backgrounds carried out a series of tests in a web-based survey setting and reported their experience and understanding of the given explanations. Three user groups (n = 20, 20, 20), each shown a distinct form of explanation, were quantitatively analyzed. We found that, as hypothesized, the CIU method performed better than both LIME and SHAP in terms of improving support for human decision-making and in being more transparent, and thus more understandable, to users. Additionally, CIU outperformed LIME and SHAP by generating explanations more rapidly. Our findings suggest that there are notable differences in human decision-making between the various explanation support settings. In line with that, we present three potential explanation methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
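As a hedged sketch of how one of the evaluated methods might be applied, the snippet below runs LIME's image explainer against a stand-in classifier. The real CNN, the VCE data set, and the class labels are replaced by illustrative assumptions.

```python
# Sketch: LIME image explanations for a (dummy) two-class image classifier.
import numpy as np
from lime import lime_image  # pip install lime

def predict_fn(images):
    """Stand-in for the trained CNN: fake probabilities from mean brightness."""
    scores = images.mean(axis=(1, 2, 3)).reshape(-1, 1)  # values in [0, 1]
    return np.hstack([1.0 - scores, scores])             # [p(normal), p(abnormal)]

image = np.random.rand(224, 224, 3)  # stand-in for an in vivo VCE frame

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=1000
)
# Highlight the superpixels that most support the top predicted class
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
print(mask.shape)  # same spatial size as the input image
```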

