Wikipedia Knowledge Graph for Explainable AI

Author(s):  
Md Kamruzzaman Sarker ◽  
Joshua Schwartz ◽  
Pascal Hitzler ◽  
Lu Zhou ◽  
Srikanth Nadella ◽  
...  

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Emmanuel Bresso ◽  
Pierre Monnin ◽  
Cédric Bousquet ◽  
François-Elie Calvier ◽  
Ndeye-Coumba Ndiaye ◽  
...  

Abstract

Background: Adverse drug reactions (ADRs) are statistically characterized within randomized clinical trials and postmarketing pharmacovigilance, but their molecular mechanisms remain unknown in most cases. This is true even for hepatic and skin toxicities, which are classically monitored during drug design. Aside from clinical trials, many elements of knowledge about drug ingredients are available in open-access knowledge graphs, such as their properties, interactions, or involvement in pathways. In addition, drug classifications that label drugs as either causative or not for several ADRs have been established.

Methods: We propose in this paper to mine knowledge graphs to identify biomolecular features that may enable automatic reproduction of expert classifications distinguishing drugs causative or not for a given type of ADR. From an Explainable AI perspective, we explore simple classification techniques such as decision trees and classification rules because they provide human-readable models, which explain the classification itself and may also provide elements of explanation for the molecular mechanisms behind ADRs. In summary, (1) we mine a knowledge graph for features; (2) we train classifiers to distinguish, on the basis of the extracted features, drugs associated or not with two commonly monitored ADRs: drug-induced liver injuries (DILI) and severe cutaneous adverse reactions (SCAR); (3) we isolate features that are both efficient in reproducing expert classifications and interpretable by experts (i.e., Gene Ontology terms, drug targets, or pathway names); and (4) we manually evaluate in a mini-study how they may be explanatory.

Results: Extracted features reproduce with good fidelity the classification of drugs as causative or not for DILI and SCAR (accuracy = 0.74 and 0.81, respectively). Experts fully agreed that 73% and 38% of the most discriminative features are possibly explanatory for DILI and SCAR, respectively, and partially agreed (2/3) for 90% and 77% of them.

Conclusion: Knowledge graphs provide sufficiently diverse features to enable simple and explainable models to distinguish between drugs that are causative or not for ADRs. Beyond explaining classifications, the most discriminative features appear to be good candidates for further investigation of ADR mechanisms.
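As an illustration of step (2), the sketch below trains a decision tree on binary knowledge-graph features and prints it as human-readable rules; this is a minimal sketch, not the authors' code, and the feature names and toy data are hypothetical.

```python
# Minimal sketch of training an interpretable classifier on binary
# knowledge-graph features; feature names and data are hypothetical.
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per drug: presence/absence of KG features (GO terms, drug
# targets, pathways); label 1 = causative for the ADR, 0 = not.
features = ["GO:0006805_xenobiotic_metabolism",
            "target_CYP3A4",
            "pathway_bile_acid_secretion"]
X = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1],
     [0, 0, 0]]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
# The printed tree is the human-readable model: each split names a KG
# feature that discriminates causative from non-causative drugs.
print(export_text(clf, feature_names=features))
```

Each root-to-leaf path reads as a classification rule, which is what makes such models candidates for expert review of the features they select.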


Author(s):  
Enayat Rajabi ◽  
Kobra Etminani

The decisions derived from AI-based clinical decision support systems should be explainable and transparent so that healthcare professionals can understand the rationale behind the predictions. To improve these explanations, knowledge graphs are a well-suited choice for integration into eXplainable AI (XAI). In this paper, we introduce a knowledge-graph-based explainable framework for AI-based clinical decision support systems to increase their level of explainability.
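As a minimal sketch of the underlying idea (using toy triples and a hypothetical explain helper, not the framework's actual API), a knowledge graph can supply the rationale that accompanies a model's prediction:

```python
# Toy medical knowledge graph as (subject, relation, object) triples;
# all entities and relations here are hypothetical examples.
KG = [
    ("metformin", "treats", "type 2 diabetes"),
    ("metformin", "contraindicated_with", "renal impairment"),
    ("type 2 diabetes", "has_symptom", "hyperglycemia"),
]

def explain(prediction: str, patient_facts: set[str]) -> list[str]:
    """Collect KG triples linking the prediction to known patient facts,
    yielding a human-readable rationale for the recommendation."""
    return [f"{s} --{r}--> {o}"
            for s, r, o in KG
            if s == prediction or o in patient_facts]

# A hypothetical CDSS recommends metformin; the KG supplies the rationale.
for fact in explain("metformin", {"type 2 diabetes"}):
    print(fact)
```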


2020 ◽  
Vol 2 (2) ◽  
Author(s):  
Suzanna Schmeelk ◽  
Lixin Tao

Many organizations, to save costs, are moving to the Bring Your Own Mobile Device (BYOD) model and adopting applications built by third parties at an unprecedented rate. Our research examines software assurance methodologies, specifically focusing on the security analysis coverage of program analysis for mobile malware detection, mitigation, and prevention. This research focuses on secure software development of Android applications by developing knowledge graphs for threats reported by the Open Web Application Security Project (OWASP). OWASP maintains lists of the top ten security threats to web and mobile applications. We develop knowledge graphs based on the two most recent top-ten threat years and show how the knowledge graph relationships can be discovered in mobile application source code. We analyze 200+ healthcare applications from GitHub to gain an understanding of the software assurance of their developed software for one of the OWASP top ten mobile threats, the threat of “Insecure Data Storage.” We find that many of the applications store personally identifying information (PII) in potentially vulnerable places, leaving users exposed to higher risks of losing their sensitive data.
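A minimal sketch of the kind of check involved, assuming simple lexical heuristics and a hypothetical healthcare-app repository path; the paper's actual analysis maps OWASP knowledge-graph relationships onto source code, which is richer than the regex matching shown here:

```python
# Heuristic scan for OWASP "Insecure Data Storage" indicators in Android
# source: PII-like identifiers near unencrypted local storage APIs.
import re
from pathlib import Path

PII = re.compile(r"(ssn|dob|password|email|patient)", re.IGNORECASE)
SINKS = re.compile(r"(SharedPreferences|openFileOutput|getExternalStorage"
                   r"Directory|execSQL)")

def scan(repo: Path) -> None:
    """Flag lines where a PII-like name and a storage sink co-occur."""
    for src in repo.rglob("*.java"):
        text = src.read_text(errors="ignore")
        for n, line in enumerate(text.splitlines(), 1):
            if PII.search(line) and SINKS.search(line):
                print(f"{src}:{n}: possible insecure PII storage: "
                      f"{line.strip()}")

scan(Path("healthcare-app"))  # hypothetical path to a cloned repository
```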


2019 ◽  
Author(s):  
Jemmy Wiratama

I'm a Science & Technology enthusiast. I'm still learning how to build a knowledge graph and how to write a paper.


2020 ◽  
Author(s):  
Markus Jaeger ◽  
Stephan Krügel ◽  
Dimitri Marinelli ◽  
Jochen Papenbrock ◽  
Peter Schwendner
