Controlling Safety of Artificial Intelligence-Based Systems in Healthcare

Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 102
Author(s):  
Mohammad Reza Davahli ◽  
Waldemar Karwowski ◽  
Krzysztof Fiok ◽  
Thomas T.H. Wan ◽  
Hamid R. Parsaei

In response to the need to address the safety challenges in the use of artificial intelligence (AI), this research aimed to develop a framework for a safety controlling system (SCS) that tackles the black-box nature of AI in the healthcare industry. The main objective was to propose safety guidelines for implementing AI black-box models to reduce the risk of potential healthcare-related incidents and accidents. The system was developed by adopting the multi-attribute value model approach (MAVT), which comprises four parts: extracting attributes, generating weights for the attributes, developing a rating scale, and finalizing the system. On the basis of the MAVT approach, three layers of attributes were created: the first level contained six key dimensions, the second level included 14 attributes, and the third level comprised 78 attributes. The key first-level dimensions of the SCS included safety policies, incentives for clinicians, clinician and patient training, communication and interaction, planning of actions, and control of such actions. The proposed system may provide a basis for detecting AI utilization risks, preventing incidents from occurring, and developing emergency plans for AI-related risks. This approach could also guide and control the implementation of AI systems in the healthcare industry.
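As a rough illustration of the MAVT aggregation behind the SCS, the sketch below combines per-dimension ratings into a single weighted score. The six first-level dimensions come from the abstract; the weights, rating scale, and example ratings are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch of multi-attribute value (MAVT) scoring, as used to build the SCS.
# Dimension names follow the abstract; the weights and ratings below are
# hypothetical placeholders, not values from the paper.

WEIGHTS = {                      # relative importance of each first-level dimension
    "safety policies": 0.25,
    "incentives for clinicians": 0.10,
    "clinician and patient training": 0.20,
    "communication and interaction": 0.15,
    "planning of actions": 0.15,
    "control of actions": 0.15,
}

def mavt_score(ratings: dict, weights: dict = WEIGHTS) -> float:
    """Aggregate per-dimension ratings (on a 0-10 scale) into a single
    weighted-additive safety score."""
    total_weight = sum(weights.values())
    return sum(weights[d] * ratings[d] for d in weights) / total_weight

# Example: rating a hypothetical AI deployment against the six dimensions.
ratings = {
    "safety policies": 8,
    "incentives for clinicians": 5,
    "clinician and patient training": 6,
    "communication and interaction": 7,
    "planning of actions": 4,
    "control of actions": 6,
}
print(f"Overall SCS score: {mavt_score(ratings):.2f} / 10")
```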


Author(s):  
Evren Daglarli

Today, promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) are having an increasingly strong effect on the internet of things (IoT) and cyber-physical systems (CPS), which are important components of Industry 4.0. However, current deep learning models have important shortcomings. These artificial neural network-based models are black-box models that generalize from the data transmitted to them and learn from those data. Therefore, the relational link between input and output is not observable. For these reasons, serious efforts are needed on the explainability and interpretability of black-box models. In the near future, the integration of explainable artificial intelligence and meta-learning approaches into cyber-physical systems is expected to affect high-level virtualization and simulation infrastructure, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competition levels.


Author(s):  
Evren Dağlarli

Explainable artificial intelligence (xAI) is one of the interesting issues that have emerged recently. Many researchers are approaching the subject from different angles, and interesting results have emerged. However, we are still at the beginning of the road to understanding these types of models. The forthcoming years are expected to be years in which the openness of deep learning models is widely discussed. Among today's artificial intelligence approaches, we frequently encounter deep learning methods. These deep learning methods can yield highly effective results depending on the dataset size, the dataset quality, the methods used for feature extraction, the hyperparameter set used in the deep learning models, the activation functions, and the optimization algorithms. However, current deep learning models have important shortcomings. These artificial neural network-based models are black-box models that generalize from the data transmitted to them and learn from those data. Therefore, the relational link between input and output is not observable. This is an important open point in artificial neural networks and deep learning models. For these reasons, it is necessary to make serious efforts on the explainability and interpretability of black box models.
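As a concrete reminder of the configuration choices listed above (dataset size, hyperparameters, activation functions, optimization algorithms), the minimal scikit-learn sketch below exposes those knobs on a small black-box neural network; the synthetic dataset and parameter values are arbitrary illustrations, not taken from the article.

```python
# Minimal sketch: the knobs listed in the abstract (data size, hyperparameters,
# activation function, optimizer) as they appear in a small black-box neural network.
# Dataset and parameter values are arbitrary illustrations.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)  # dataset size/quality
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = MLPClassifier(
    hidden_layer_sizes=(64, 32),   # architecture hyperparameters
    activation="relu",             # activation function
    solver="adam",                 # optimization algorithm
    learning_rate_init=1e-3,       # optimizer hyperparameter
    max_iter=300,
    random_state=0,
).fit(X_train, y_train)

print(f"test accuracy: {model.score(X_test, y_test):.3f}")
# The fitted weights are not directly interpretable: the input-output link stays opaque.
```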


2021 ◽  
Vol 7 ◽  
pp. e479
Author(s):  
Elvio Amparore ◽  
Alan Perotti ◽  
Paolo Bajardi

The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is little consensus on how to quantitatively evaluate explanations in practice. Moreover, explanations are typically used only to inspect black-box models, and the proactive use of explanations as decision support is generally overlooked. Among the many approaches to XAI, a widely adopted paradigm is Local Linear Explanations, with LIME and SHAP emerging as state-of-the-art methods. We show that these methods are plagued by many defects, including unstable explanations, divergence of actual implementations from the promised theoretical properties, and explanations for the wrong label. This highlights the need for standard and unbiased evaluation procedures for Local Linear Explanations in the XAI field. In this paper we address the problem of identifying a clear and unambiguous set of metrics for the evaluation of Local Linear Explanations. This set includes both existing and novel metrics defined specifically for this class of explanations. All metrics have been included in an open Python framework, named LEAF. The purpose of LEAF is to provide a reference for end users to evaluate explanations in a standardised and unbiased way, and to guide researchers towards developing improved explainable techniques.
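LEAF itself is not reproduced here; as a minimal illustration of one defect the paper highlights, namely the instability of Local Linear Explanations, the sketch below explains the same instance twice with LIME and compares the resulting feature weights. The dataset, model, and crude drift measure are illustrative assumptions rather than LEAF metrics.

```python
# Minimal sketch (not the LEAF framework itself): probe the stability of a
# Local Linear Explanation by explaining the same instance twice with LIME
# and comparing the resulting feature weights. Dataset, model, and the crude
# drift measure below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, mode="classification", feature_names=[f"f{i}" for i in range(10)]
)

def lime_weights(instance):
    # LIME's neighbourhood sampling is stochastic, so repeated calls can differ.
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=10)
    return dict(exp.as_list())

w1, w2 = lime_weights(X[0]), lime_weights(X[0])
drift = np.mean([abs(w1[k] - w2.get(k, 0.0)) for k in w1])
print(f"mean absolute change in feature weights between two runs: {drift:.4f}")
```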


Author(s):  
Mihail Grif ◽  
Baurzhan Belgibaev ◽  
Amantur Umarov ◽  
...  

The greenhouse is a closed-type agroecological system in which energy processes are strictly determined by the technological process of growing plants, taking into account the influence of the environment. Greenhouse models are divided into two types: white-box models and black-box models. The well-known model of the “Soil-Plant-Atmosphere” system belongs to the first type; it is built on the physical principles of thermo-, hydro-, and gas dynamics and consists of several complex differential equations that use numerous coefficients and parameters that must be known in advance. Such models are cumbersome, require large computational resources, and are time-consuming. The proposed model of the “Plant-Environment-Situation-Management” system is a practical analogue of the well-known “Soil-Plant-Atmosphere” model. Its main difference is that it is a black-box model, which approximates the observed processes and makes it possible to describe them on the basis of experimental data. On the basis of the “Plant-Environment-Situation-Management” model, the software and hardware system “Smart Greenhouse” was developed: a human-machine system with a rational separation of the functions of preparation (computer) and decision-making (human). It makes it possible to monitor and control the growth and development of the plant during the growing season, taking into account the influence of environmental conditions. The system is implemented and used in the greenhouse of the al-Farabi Kazakh National University.
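The contrast the authors draw can be sketched in a few lines: instead of solving the coupled differential equations of a white-box “Soil-Plant-Atmosphere” model, a black-box model is fitted directly to experimental observations. The sensor variables, synthetic data, and regressor below are illustrative assumptions, not the authors' “Smart Greenhouse” implementation.

```python
# Minimal sketch of the black-box idea: approximate an observed greenhouse
# process from experimental data instead of solving physical equations.
# The variables, synthetic data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# hypothetical sensor readings: outside temperature, solar radiation, humidity, ventilation
X = np.column_stack([
    rng.uniform(5, 35, n),     # outside temperature, degrees C
    rng.uniform(0, 900, n),    # solar radiation, W/m^2
    rng.uniform(30, 90, n),    # relative humidity, %
    rng.uniform(0, 1, n),      # ventilation opening, fraction
])
# synthetic "observed" inside temperature (stand-in for experimental data)
y = 0.6 * X[:, 0] + 0.01 * X[:, 1] - 4.0 * X[:, 3] + rng.normal(0, 1.0, n) + 8.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
# The fitted model reproduces the observed behaviour without encoding the
# underlying thermo-, hydro- or gas dynamics.
```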


2019 ◽  
Vol 29 (Supplement_4) ◽  
Author(s):  
S Ram

Abstract With rapid developments in big data technology and the prevalence of large-scale datasets from diverse sources, the healthcare predictive analytics (HPA) field is witnessing a dramatic surge in interest. In healthcare, it is not only important to provide accurate predictions, but also critical to provide reliable explanations of the underlying black-box models making those predictions. Such explanations can play a crucial role not only in supporting clinical decision-making but also in facilitating user engagement and patient safety. If users and decision makers do not have faith in the HPA model, it is highly likely that they will reject its use. Furthermore, it is extremely risky to blindly accept and apply the results derived from black-box models, which might lead to undesirable consequences or life-threatening outcomes in high-stakes domains such as healthcare. As machine learning and artificial intelligence systems are becoming more capable and ubiquitous, explainable artificial intelligence and machine learning interpretability are garnering significant attention among practitioners and researchers. The introduction of policies such as the General Data Protection Regulation (GDPR) has amplified the need for ensuring human interpretability of prediction models. In this talk I will discuss methods and applications for developing local as well as global explanations from machine learning models and the value they can provide for healthcare prediction.
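The distinction between local and global explanations mentioned in the talk can be illustrated with a deliberately simple white-box case: for a linear model, each feature's contribution to one patient's prediction (local) and its average absolute contribution over the cohort (global) can be read off directly. The synthetic data and feature names below are placeholders, not a clinical dataset.

```python
# Minimal sketch of local vs. global explanations, using a linear model whose
# additive structure makes the contributions exact. Data and feature names are
# synthetic placeholders, not a real clinical dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = StandardScaler().fit_transform(X)
features = ["age", "bmi", "bp", "glucose", "hr"]   # hypothetical feature names

model = LogisticRegression().fit(X, y)
contrib = X * model.coef_[0]                       # per-patient, per-feature logit contributions

# Local explanation: contributions for a single patient's prediction.
patient = 0
local = dict(zip(features, contrib[patient].round(3)))
print("local explanation (patient 0):", local)

# Global explanation: mean absolute contribution of each feature over the cohort.
global_imp = dict(zip(features, np.abs(contrib).mean(axis=0).round(3)))
print("global importance:", global_imp)
```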


2021 ◽  
Vol 9 ◽  
Author(s):  
Eduardo Eiji Maeda ◽  
Päivi Haapasaari ◽  
Inari Helle ◽  
Annukka Lehikoinen ◽  
Alexey Voinov ◽  
...  

Modeling is essential for modern science, and science-based policies are directly affected by the reliability of model outputs. Artificial intelligence has improved the accuracy and capability of model simulations, but often at the expense of a rational understanding of the systems involved. The lack of transparency in black-box models, artificial-intelligence-based ones among them, can potentially affect trust in science-driven policy making. Here, we suggest that a broader discussion is needed to address the implications of black-box approaches for the reliability of scientific advice used for policy making. We argue that participatory methods can bridge the gap between increasingly complex scientific methods and the people affected by their interpretations.


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
M Afnan ◽  
Y Liu ◽  
V Conitzer ◽  
C Rudin ◽  
A Mishra ◽  
...  

Abstract
Study question: What are the epistemic and ethical considerations of clinically implementing Artificial Intelligence (AI) algorithms in embryo selection?
Summary answer: AI embryo selection algorithms used to date are “black-box” models with significant epistemic and ethical issues, and there are no trials assessing their clinical effectiveness.
What is known already: The innovation of time-lapse imaging offers the potential to generate vast quantities of data for embryo assessment. Computer Vision allows image data to be analysed using algorithms developed via machine learning, which learn and adapt as they are exposed to more data. Most algorithms are developed using neural networks and are uninterpretable (or “black box”). Uninterpretable models are either too complicated to understand or proprietary, in which case comprehension is impossible for outsiders. In the IVF context, these outsiders include doctors, embryologists and patients, which raises ethical questions about the use of such models in embryo selection.
Study design, size, duration: We performed a scoping review of articles evaluating AI for embryo selection in IVF. We considered the epistemic and ethical implications of current approaches.
Participants/materials, setting, methods: We searched Medline, Embase, ClinicalTrials.gov and the EU Clinical Trials Register for full-text papers evaluating AI for embryo selection using the following key words: artificial intelligence* OR AI OR neural network* OR machine learning OR support vector machine OR automatic classification AND IVF OR in vitro fertilisation OR embryo*, as well as relevant MeSH and Emtree terms for Medline and Embase respectively.
Main results and the role of chance: We found no trials evaluating clinical effectiveness, either published or registered. We found efficacy studies which looked at two types of outcomes: accuracy for predicting pregnancy or live birth, and agreement with embryologist evaluation. Some algorithms were shown to broadly differentiate well between “good-” and “poor-” quality embryos but not between embryos of similar quality, which is the clinical need. Almost universally, the AI models were opaque (“black box”) in that at least some part of the process was uninterpretable. “Black box” models are problematic for epistemic and ethical reasons. Epistemic concerns include information asymmetries between algorithm developers and doctors, embryologists and patients; the risk of biased prediction caused by known and/or unknown confounders during the training process; difficulties in real-time error checking due to limited interpretability; the economics of buying into commercial proprietary models that are brittle to variation in the treatment process; and an overall difficulty in troubleshooting. Ethical pitfalls include the risk of misrepresenting patient values; concern for the health and well-being of future children; the risk of disvaluing disability; possible societal implications; and a responsibility gap in the event of adverse events.
Limitations, reasons for caution: Our search was limited to the two main medical research databases. Although we checked article references for more publications, we were less likely to identify studies that were not indexed in Medline or Embase, especially if they were not cited in studies identified in our search.
Wider implications of the findings: It is premature to implement AI for embryo selection outside of a clinical trial. AI for embryo selection is potentially useful, but must be done carefully and transparently, as the epistemic and ethical issues are significant. We advocate for the use of interpretable AI models to overcome these issues.
Trial registration number: not applicable


AI and Ethics ◽  
2021 ◽  
Author(s):  
Christian Herzog

Abstract This Comment explores the implications of a lack of tools that facilitate an explicable utilization of epistemologically richer, but also more involved white-box approaches in AI. In contrast, advances in explainable artificial intelligence for black-box approaches have led to the availability of semi-standardized and attractive toolchains that offer a seemingly competitive edge over inherently interpretable white-box models in terms of intelligibility towards users. Consequently, there is a need for research on efficient tools for rendering interpretable white-box approaches in AI explicable to facilitate responsible use.
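For contrast with the semi-standardized black-box toolchains discussed above, the sketch below shows one kind of inherently interpretable white-box model: a shallow decision tree whose complete decision logic can be printed and audited without any post-hoc explanation layer. The dataset and tree depth are illustrative assumptions.

```python
# Minimal sketch of an inherently interpretable white-box model: a shallow
# decision tree whose complete decision logic can be printed and audited.
# Dataset and tree depth are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The printed rules *are* the model: no post-hoc explanation layer is needed.
print(export_text(tree, feature_names=list(data.feature_names)))
```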

