Explainable health prediction from facial features with transfer learning

2021 ◽  
pp. 1-13
Author(s):  
Tee Connie ◽  
Yee Fan Tan ◽  
Michael Kah Ong Goh ◽  
Hock Woon Hon ◽  
Zulaikha Kadim ◽  
...  

In recent years, Artificial Intelligence (AI) has been widely deployed in the healthcare industry. The new AI technology enables efficient and personalized healthcare systems for the public. In this paper, transfer learning with a pre-trained VGGFace model is applied to identify symptoms of sickness from a person's facial features. Because the deep learning model's decision-making process is opaque, this paper investigates the use of Explainable AI (XAI) techniques to solicit explanations for the predictions made by the model. Various XAI techniques, including Integrated Gradients, Explainable region-based AI (XRAI) and Local Interpretable Model-Agnostic Explanations (LIME), are studied. XAI is crucial for increasing the model's transparency and reliability for practical deployment. Experimental results demonstrate that the attribution methods can give proper explanations for the decisions made by highlighting important attributes in the images. The facial features that account for positive and negative class predictions are highlighted appropriately for effective visualization. XAI can help to increase the accountability and trustworthiness of the healthcare system, as it provides insights into how a conclusion is derived from the AI model.
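As a rough illustration of how such a pipeline can be assembled, the sketch below fine-tunes a frozen pre-trained backbone for a two-class (healthy/sick) face task and asks LIME for a superpixel explanation of one prediction. VGG16 with ImageNet weights stands in here for the VGGFace model used in the paper, and the layer sizes, file name and class ordering are illustrative assumptions rather than the authors' configuration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16   # stand-in for VGGFace weights
from lime import lime_image                        # pip install lime

# Frozen convolutional backbone + small trainable head for the healthy / sick task.
base = VGG16(weights="imagenet", include_top=False, pooling="avg",
             input_shape=(224, 224, 3))
base.trainable = False
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(2, activation="softmax"),  # assumed order: [healthy, sick]
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=..., validation_data=...)

def predict_fn(images):
    """LIME calls this with a batch of perturbed images; return class probabilities."""
    x = tf.keras.applications.vgg16.preprocess_input(np.array(images, dtype=np.float32))
    return model.predict(x, verbose=0)

# "face.jpg" is a placeholder path for one face image to be explained.
face_image = np.array(tf.keras.utils.load_img("face.jpg", target_size=(224, 224)))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(face_image, predict_fn,
                                         top_labels=2, hide_color=0, num_samples=1000)
overlay, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                               positive_only=True, num_features=5,
                                               hide_rest=False)
# `mask` marks the facial regions that pushed the prediction towards the top class.
```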

2021 ◽  
Author(s):  
Gaurav Chachra ◽  
Qingkai Kong ◽  
Jim Huang ◽  
Srujay Korlakunta ◽  
Jennifer Grannen ◽  
...  

Abstract After significant earthquakes, images are posted on social media platforms by individuals and media agencies, owing to the widespread use of smartphones. These images can provide information about shaking damage in the earthquake region to both the public and the research community, and can potentially guide rescue work. This paper presents an automated way to extract images of damaged buildings after earthquakes from social media platforms such as Twitter, and thus to identify the particular user posts containing such images. Using transfer learning and ~6500 manually labelled images, we trained a deep learning model to recognize images with damaged buildings in the scene. The trained model achieved good performance when tested on newly acquired images of earthquakes at different locations, and ran in near real time on the Twitter feed after the 2020 M7.0 earthquake in Turkey. Furthermore, to better understand how the model makes decisions, we also implemented the Grad-CAM method to visualize the important locations on the images that facilitate the decision.
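For readers unfamiliar with Grad-CAM, a minimal sketch of the visualization step is given below, assuming a Keras classifier obtained by transfer learning. The last-convolutional-layer name, preprocessing and class indexing are placeholders that depend on the backbone actually used.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer="block5_conv3", class_index=None):
    """Grad-CAM heatmap for one preprocessed image of shape (H, W, 3).

    `last_conv_layer` is an assumption: use the name of the final convolutional
    layer of whichever backbone the damage classifier was transferred from.
    """
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(last_conv_layer).output, model.output])
    batch = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(batch)
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))     # default: explain the top class
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)              # d(score) / d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))     # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1) # weighted sum of feature maps
    cam = tf.nn.relu(cam)                               # keep only positive evidence
    cam = cam / (tf.reduce_max(cam) + 1e-8)             # normalise to [0, 1]
    return cam.numpy()                                   # upsample and overlay for display
```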


Smart Cities ◽  
2020 ◽  
Vol 3 (4) ◽  
pp. 1353-1382
Author(s):  
Dhavalkumar Thakker ◽  
Bhupesh Kumar Mishra ◽  
Amr Abdullatif ◽  
Suvodeep Mazumdar ◽  
Sydney Simpson

Traditional Artificial Intelligence (AI) technologies used in developing smart cities solutions, Machine Learning (ML) and recently Deep Learning (DL), rely more on utilising the best representative training datasets and feature engineering and less on the available domain expertise. We argue that such an approach to solution development makes the outcome of solutions less explainable, i.e., it is often not possible to explain the results of the model. There is a growing concern among policymakers in cities about this lack of explainability in AI solutions, which is considered a major hindrance to the wider acceptability of, and trust in, such AI-based solutions. In this work, we survey the concept of ‘explainable deep learning’ as a subset of the ‘explainable AI’ problem and propose a new solution using Semantic Web technologies, demonstrated with a smart cities flood monitoring application in the context of a European Commission-funded project. Monitoring of gullies and drainage in crucial geographical areas susceptible to flooding is an important aspect of any flood monitoring solution. Typical solutions to this problem use cameras to capture images showing the affected areas in real time, with different objects such as leaves and plastic bottles, and build a DL-based classifier to detect such objects and classify blockages based on the presence and coverage of these objects in the images. In this work, we uniquely propose an Explainable AI solution using DL and Semantic Web technologies to build a hybrid classifier. In this hybrid classifier, the DL component detects object presence and coverage level, while semantic rules designed in close consultation with experts carry out the classification. By using expert knowledge in the flooding context, our hybrid classifier provides flexibility in categorising the image using objects and their coverage relationships. The experimental results, demonstrated with a real-world use case, showed that this hybrid approach to image classification yields on average an 11% improvement (F-Measure) in classification performance compared to a DL-only classifier. It also has the distinct advantage of integrating experts’ knowledge in defining the decision-making rules that represent complex circumstances, and of using such knowledge to explain the results.
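The following toy sketch conveys the hybrid idea in plain Python rather than the project's actual Semantic Web rule encoding: the DL detector's per-object coverage estimates are fed to expert-style rules that assign a blockage category, and those same rules serve as the explanation. Object labels, thresholds and categories are invented for illustration.

```python
from typing import Dict

def classify_blockage(detections: Dict[str, float]) -> str:
    """detections maps an object label to the fraction of the drain opening it covers (0..1)."""
    coverage = sum(detections.values())
    has_solid_litter = any(label in detections for label in ("plastic_bottle", "can", "bag"))

    if coverage >= 0.6:
        return "blocked"              # most of the opening is covered
    if coverage >= 0.3 or (has_solid_litter and coverage >= 0.15):
        return "partially_blocked"    # solid litter escalates moderate coverage
    return "clear"

# Example detector output: leaves cover 25% of the grate plus a plastic bottle.
print(classify_blockage({"leaves": 0.25, "plastic_bottle": 0.05}))  # -> partially_blocked
```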


Author(s):  
Jai Galliott ◽  
Jason Scholz

This chapter addresses the military promise of artificial intelligence (AI), which is increasing along with advances in deep learning, neural networks, and robotics. The influence of AI will be felt across the full spectrum of armed conflict, from intelligence, surveillance, and reconnaissance through to the offensive and defensive employment of lethal force. This is to say that AI is less a weapon than a military enabler, and yet the public still likens AI in the military context to killer robots, a perception arising from fears voiced publicly by numerous academic, business, and government leaders about the existential risk posed by an approaching singularity and the belief that AI could trigger the next world war. The chapter then considers what constitutes militarized “artificial intelligence”; the justifications for employing AI given the limits of deep learning and the human role in alleged “black boxes”; the wider moral advantages, disadvantages, and risks of using AI in the military domain; and the potential implications for the way in which the armed forces plan, train, and fight. In doing so, it advances the concept of ethical AI as that which yields humanitarian benefits, and differentiates between minimally and maximally just versions of said AI.


Subject: Prospects for artificial intelligence applications.
Significance: Artificial intelligence (AI) technologies, particularly those using 'deep learning', have in the past five years helped to automate many tasks previously outside the capabilities of computers. There are signs that the feverish pace of progress seen recently is slowing.
Impacts: Western legislation will make companies responsible for preventing decisions based on biased AI. Advances in 'explainable AI' will be rapid. China will be a major research player in AI technologies, alongside the United States, Japan and Europe.


2021 ◽  
Vol 11 (11) ◽  
pp. 1213
Author(s):  
Morteza Esmaeili ◽  
Riyas Vettukattil ◽  
Hasan Banitalebi ◽  
Nina R. Krogh ◽  
Jonn Terje Geitung

Primary malignancies in the adult brain are fatal diseases worldwide. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in different image analysis tasks, including differentiating tumor-containing brains from healthy brains. AI models, however, operate as black boxes, concealing the rational interpretations that are an essential step towards translating AI imaging tools into clinical routine. An explainable AI approach aims to visualize the high-level features of trained models or to integrate interpretability into the training process. This study evaluates the performance of selected deep-learning algorithms in localizing tumor lesions and distinguishing the lesion from healthy regions in magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classify some tumor-containing brains based on other, non-relevant features. The results suggest that explainable AI approaches can build intuition for model interpretability and may play an important role in the performance evaluation of deep learning models. Developing explainable AI approaches will be an essential tool for improving human–machine interactions and assisting in the selection of optimal training methods.
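One plausible way to compute the kind of classification-versus-localization correlation reported above is sketched below, assuming per-case attribution maps and radiologist tumor masks are available: localization is scored as the Dice overlap of the thresholded attribution map with the mask, then correlated with the per-case classification confidence. The synthetic inputs are placeholders only.

```python
import numpy as np
from scipy.stats import pearsonr

def dice_overlap(saliency, tumor_mask, threshold=0.5):
    """Dice coefficient between a thresholded saliency map and the binary tumor mask."""
    pred = saliency >= threshold
    truth = tumor_mask.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + 1e-8)

# Hypothetical per-case inputs; in practice these come from the test set and the XAI method.
rng = np.random.default_rng(0)
saliency_maps     = [rng.random((128, 128)) for _ in range(30)]
tumor_masks       = [rng.random((128, 128)) > 0.7 for _ in range(30)]
class_confidences = rng.random(30)

localization = [dice_overlap(s, m) for s, m in zip(saliency_maps, tumor_masks)]
r, p = pearsonr(class_confidences, localization)
print(f"correlation between classification and localization: R={r:.2f}, p={p:.3f}")
```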


Author(s):  
Miroslav M. Bojović ◽  
Veljko Milutinović ◽  
Dragan Bojić ◽  
Nenad Korolija

Contemporary healthcare systems face growing demand for their services, rising costs, and workforce shortages. Artificial intelligence has the potential to transform how care is delivered and to help meet these challenges. Recent healthcare systems have focused on using knowledge management and AI. The proposed solution is to reach explainable and causal AI by combining the accuracy of deep-learning algorithms with visibility into the factors that are important to the algorithm's conclusion, in a way that is accessible and understandable to physicians. Therefore, the authors propose an AI approach in which encoded clinical guidelines and protocols provide a starting point that is augmented by models that learn from data. A new structure of electronic health records that connects data from wearables and genomics, together with an innovative, extensible big data architecture appropriate for this AI concept, is proposed. Consequently, the proposed technology may drastically decrease the need for expensive software and, hopefully, eliminate the need to perform diagnostics in expensive institutions.
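A minimal, illustrative-only sketch of the "guidelines first, model second" idea follows: an encoded protocol rule produces a baseline recommendation together with the factors it used, and a learned risk model can escalate it, with both sources of evidence surfaced to the physician. The rule, threshold values and feature names are invented placeholders, not actual clinical guidance.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Recommendation:
    action: str
    risk: float
    factors: List[str] = field(default_factory=list)   # evidence shown to the physician

def guideline_rule(patient: Dict) -> Recommendation:
    """Hypothetical encoded protocol rule; real guidelines would be far richer."""
    if patient["systolic_bp"] >= 180:
        return Recommendation("urgent_referral", 0.9, ["systolic_bp >= 180 (protocol rule)"])
    return Recommendation("routine_follow_up", 0.2, ["no protocol trigger"])

def hybrid_decision(patient: Dict, model_risk: float) -> Recommendation:
    rec = guideline_rule(patient)
    combined = max(rec.risk, model_risk)                # the model may escalate, never silently override
    if combined >= 0.7 and rec.action != "urgent_referral":
        rec = Recommendation("urgent_referral", combined,
                             rec.factors + [f"learned risk model score {model_risk:.2f}"])
    else:
        rec.risk = combined
    return rec

print(hybrid_decision({"systolic_bp": 150}, model_risk=0.82))
```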


2021 ◽  
Vol 309 ◽  
pp. 01167
Author(s):  
G. Ramesh ◽  
J. Praveen

An electric vehicle with autonomous driving is possible given technological innovations across multiple disciplines. Electric vehicles benefit the environment and are much desired in the contemporary world. Another great possibility is to make the vehicle drive itself (autonomous driving) given appropriate instructions. When the two are combined, it leads to a different dimension of environmental safety and technology-driven driving that has many pros and cons. The field is still in its infancy and much research remains to be carried out. In this context, this paper aims to build an Artificial Intelligence (AI) framework with the dual goal of monitoring and regulating power usage and of facilitating autonomous driving with the technology-driven, real-time knowledge required. A methodology combining multiple deep learning methods is proposed. For instance, deep learning is used for vehicle localization, high-level path planning and low-level path planning. In addition, reinforcement learning and transfer learning are used to speed up the process of gaining real-time intelligence. To facilitate real-time knowledge discovery from given scenarios, both edge and cloud resources are appropriately exploited to benefit the vehicle, as driving safety is given paramount importance. A power management module uses a modular Recurrent Neural Network, and a speed control module provides real-time control over the vehicle's speed. The AI framework enables electric and autonomous vehicles to realize unprecedented possibilities in power management and safe autonomous driving. Keywords: Artificial Intelligence, Autonomous Driving, Recurrent Neural Network, Transfer Learning.
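As a sketch of what the power management module's recurrent model might look like, the snippet below defines a small stacked GRU (a plain stand-in for the modular Recurrent Neural Network mentioned above) that maps a short window of vehicle telemetry to the next-step power demand. The window length, feature set and layer sizes are assumptions for illustration.

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 60, 4   # assumed telemetry: speed, acceleration, state-of-charge, auxiliary load

power_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, FEATURES)),
    tf.keras.layers.GRU(64, return_sequences=True),   # first recurrent block
    tf.keras.layers.GRU(32),                           # second recurrent block
    tf.keras.layers.Dense(1),                          # predicted next-step power demand (kW)
])
power_model.compile(optimizer="adam", loss="mse")
# power_model.fit(telemetry_windows, next_step_power_kw, epochs=..., batch_size=...)
# The prediction can then feed a regulator that caps or schedules auxiliary loads.
```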


2021 ◽  
Vol 6 (22) ◽  
pp. 36-50
Author(s):  
Ali Hassan ◽  
Riza Sulaiman ◽  
Mansoor Abdullateef Abdulgabber ◽  
Hasan Kahtan

Recent advances in artificial intelligence, particularly in the field of machine learning (ML), have shown that such models can be incredibly successful, producing encouraging results and leading to diverse applications. Despite the promise of artificial intelligence, without transparency in machine learning models it is difficult for stakeholders to trust the results of such models, which can hinder successful adoption. This concern has sparked scientific interest and led to the development of transparency-supporting algorithms. Although studies have raised awareness of the need for explainable AI, the question of how to meet real users' needs for understanding AI remains unresolved. This study provides a review of the literature on human-centric Machine Learning and new approaches to user-centric explanations for deep learning models. We highlight the challenges and opportunities facing this area of research. The goal is for this review to serve as a resource for both researchers and practitioners. The study found that one of the most difficult aspects of implementing machine learning models is gaining the trust of end-users.


2021 ◽  
Author(s):  
Bryce J. Murray

The recent resurgence of Artificial Intelligence (AI), specifically in the context of applications like healthcare, security and defense, IoT, and other areas that have a big impact on human life, has led to a demand for eXplainable AI (XAI). The production of explanations is argued to be a key aspect of achieving goals like trustworthiness and transparent, rather than opaque, AI. XAI is also of fundamental academic interest with respect to helping us identify weaknesses in the pursuit of making better AI. Herein, I focus on one piece of the AI puzzle: information fusion. In this work, I propose XAI fusion indices, linguistic summaries (aka textual explanations) of these indices, and local explanations for the fuzzy integral. However, a limitation of these indices is that they are tailored to highly educated fusion experts, and it is not clear what to do with the resulting explanations. Herein, I extend the introduced indices to actionable explanations, which are demonstrated in the context of two case studies: multi-source fusion and deep learning for remote sensing. This work ultimately shows what XAI for fusion is and how to create actionable insights.
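To make the fusion setting concrete, the sketch below implements the discrete Choquet (fuzzy) integral for combining per-source support values under a fuzzy measure, together with the Shapley importance of each source, a classic index that stands in for, but is not identical to, the thesis's proposed XAI fusion indices. The source names and measure values are hypothetical.

```python
from itertools import combinations
from math import factorial

def choquet_integral(values, measure):
    """Discrete Choquet integral of per-source support values w.r.t. a fuzzy measure.

    values:  dict source -> support in [0, 1]
    measure: dict frozenset(sources) -> measure value, with measure[frozenset()] == 0
             and measure of the full set == 1 (assumed monotone).
    """
    order = sorted(values, key=values.get, reverse=True)   # largest support first
    total, prev_g, coalition = 0.0, 0.0, set()
    for src in order:
        coalition.add(src)
        g = measure[frozenset(coalition)]
        total += values[src] * (g - prev_g)
        prev_g = g
    return total

def shapley_indices(sources, measure):
    """Shapley importance of each source under the fuzzy measure (explanation index)."""
    n = len(sources)
    phi = {}
    for i in sources:
        others = [s for s in sources if s != i]
        val = 0.0
        for k in range(len(others) + 1):
            for K in combinations(others, k):
                w = factorial(n - k - 1) * factorial(k) / factorial(n)
                val += w * (measure[frozenset(K) | {i}] - measure[frozenset(K)])
        phi[i] = val
    return phi

# Illustrative three-source example with a hand-set (hypothetical) fuzzy measure.
sources = ["radar", "optical", "lidar"]
g = {frozenset(): 0.0,
     frozenset({"radar"}): 0.4, frozenset({"optical"}): 0.3, frozenset({"lidar"}): 0.2,
     frozenset({"radar", "optical"}): 0.8, frozenset({"radar", "lidar"}): 0.6,
     frozenset({"optical", "lidar"}): 0.5,
     frozenset({"radar", "optical", "lidar"}): 1.0}
support = {"radar": 0.9, "optical": 0.6, "lidar": 0.4}

print("fused support:", choquet_integral(support, g))
print("source importance:", shapley_indices(sources, g))
```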


2020 ◽  
Vol 9 (7) ◽  
pp. 2206
Author(s):  
Teen-Hang Meen ◽  
Yusuke Matsumoto ◽  
Kuan-Han Lee

Recently, due to the advancement of network technology, big data and artificial intelligence, the healthcare industry has undergone many sector-wide changes. Medical care has changed not only from passive and hospital-centric to preventative and personalized, but also from disease-centric to health-centric. Healthcare systems and basic medical research are becoming more intelligent and are being implemented in biomedical engineering. This Special Issue on “Clinical Medicine for Healthcare and Sustainability” selected 30 excellent papers from the 160 papers presented at IEEE ECBIOS 2019 on the topic of clinical medicine for healthcare and sustainability. Our purpose is to encourage scientists to present their experimental and theoretical research to facilitate the scientific prediction and impact assessment of global change and development.

