Reasoning on Knowledge Graphs with Debate Dynamics

2020 ◽  
Vol 34 (04) ◽  
pp. 4123-4131
Author(s):  
Marcel Hildebrandt ◽  
Jorge Andres Quintero Serna ◽  
Yunpu Ma ◽  
Martin Ringsquandl ◽  
Mitchell Joblin ◽  
...  

We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics. The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents which extract arguments – paths in the knowledge graph – with the goal of promoting the fact as true (thesis) or false (antithesis), respectively. Based on these arguments, a binary classifier, called the judge, decides whether the fact is true or false. The two agents can be considered sparse, adversarial feature generators that present interpretable evidence for either the thesis or the antithesis. In contrast to other black-box methods, the arguments allow users to understand the decision of the judge. Since the focus of this work is to create an explainable method that maintains competitive predictive accuracy, we benchmark our method on the triple classification and link prediction tasks. We find that our method outperforms several baselines on the benchmark datasets FB15k-237, WN18RR, and Hetionet. We also conduct a survey and find that the extracted arguments are informative for users.
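The debate mechanism can be sketched in a few lines: two agents each submit argument paths, and a judge scores them to reach a verdict. This is a toy illustration only; the paths, relation names, and fixed weights below are invented, and the real judge is a learned neural classifier, not a linear scorer.

```python
# Toy debate-dynamics sketch: arguments are paths (lists of relation names),
# the judge is a fixed linear scorer over the relations on those paths.
# All weights and relation names are illustrative inventions.
def judge_score(arguments, weights):
    # Sum the judge's weight for every relation on every argument path.
    return sum(weights.get(rel, 0.0) for path in arguments for rel in path)

def classify(pro_args, con_args, weights, threshold=0.0):
    # The judge weighs the pro agent's evidence (thesis) against the
    # con agent's evidence (antithesis) and returns a true/false verdict.
    return judge_score(pro_args, weights) - judge_score(con_args, weights) > threshold
```

Because the verdict is a function of explicitly enumerated paths, a user can inspect exactly which arguments swung the decision, which is the interpretability benefit the abstract describes.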

2020 ◽  
Vol 34 (03) ◽  
pp. 3065-3072 ◽  
Author(s):  
Zhanqiu Zhang ◽  
Jianyu Cai ◽  
Yongdong Zhang ◽  
Jie Wang

Knowledge graph embedding, which aims to represent entities and relations as low dimensional vectors (or matrices, tensors, etc.), has been shown to be a powerful technique for predicting missing links in knowledge graphs. Existing knowledge graph embedding models mainly focus on modeling relation patterns such as symmetry/antisymmetry, inversion, and composition. However, many existing approaches fail to model semantic hierarchies, which are common in real-world applications. To address this challenge, we propose a novel knowledge graph embedding model—namely, Hierarchy-Aware Knowledge Graph Embedding (HAKE)—which maps entities into the polar coordinate system. HAKE is inspired by the fact that concentric circles in the polar coordinate system can naturally reflect the hierarchy. Specifically, the radial coordinate aims to model entities at different levels of the hierarchy, and entities with smaller radii are expected to be at higher levels; the angular coordinate aims to distinguish entities at the same level of the hierarchy, and these entities are expected to have roughly the same radii but different angles. Experiments demonstrate that HAKE can effectively model the semantic hierarchies in knowledge graphs, and significantly outperforms existing state-of-the-art methods on benchmark datasets for the link prediction task.
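The polar-coordinate scoring idea can be written compactly: a modulus (radial) term places entities at hierarchy levels and a phase (angular) term separates entities within a level. The sketch below follows the general form described in the abstract; the mixing weight and all embedding values are illustrative, not the paper's trained parameters.

```python
import numpy as np

def hake_score(h_m, h_p, r_m, r_p, t_m, t_p, lam=0.5):
    """Toy HAKE-style score. The modulus part models hierarchy levels
    (radii); the phase part distinguishes entities at the same level.
    lam is an illustrative weight between the two parts."""
    modulus = np.linalg.norm(h_m * r_m - t_m, ord=2)
    phase = np.linalg.norm(np.sin((h_p + r_p - t_p) / 2.0), ord=1)
    # Higher (less negative) scores indicate more plausible triples.
    return -(modulus + lam * phase)
```

A triple whose head, scaled by the relation's modulus, lands exactly on the tail's radius and phase scores 0, the maximum; any radial or angular mismatch is penalized.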


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 485
Author(s):  
Meihong Wang ◽  
Linling Qiu ◽  
Xiaoli Wang

Knowledge graphs (KGs) have been widely used in the field of artificial intelligence, such as in information retrieval, natural language processing, recommendation systems, etc. However, the open nature of KGs means they are often incomplete, which creates the need for knowledge graph completion to enhance their practical utility. Link prediction is a fundamental task in knowledge graph completion that uses existing relations to infer new relations and thus build a more complete knowledge graph. Numerous methods have been proposed to perform the link-prediction task based on various representation techniques. Among them, KG-embedding models have significantly advanced the state of the art in the past few years. In this paper, we provide a comprehensive survey of KG-embedding models for link prediction in knowledge graphs. We first provide a theoretical analysis and comparison of existing methods for generating KG embeddings. Then, we investigate several representative models, classified into five categories. Finally, we conduct experiments on two benchmark datasets to report comprehensive findings and provide new insights into the strengths and weaknesses of existing models.


2022 ◽  
Author(s):  
Simon Ott ◽  
Adriano Barbosa-Silva ◽  
Matthias Samwald

Machine learning algorithms for link prediction can be valuable tools for hypothesis generation. However, many current algorithms are black boxes or lack good user interfaces that could facilitate insight into why predictions are made. We present LinkExplorer, a software suite for predicting, explaining and exploring links in large biomedical knowledge graphs. LinkExplorer integrates our novel, rule-based link prediction engine SAFRAN, which was recently shown to outcompete other explainable algorithms and established black box algorithms. Here, we demonstrate highly competitive evaluation results of our algorithm on multiple large biomedical knowledge graphs, and release a web interface that allows for interactive and intuitive exploration of predicted links and their explanations.
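Rule-based link predictors like SAFRAN score a candidate link by aggregating the confidences of all rules that predict it. A common aggregation is noisy-OR, sketched below; this is a simplified stand-in for illustration, not SAFRAN's exact procedure (which additionally handles redundancy among rules).

```python
def noisy_or(confidences):
    # Aggregate the confidences of all rules that fired for a candidate
    # link: the link is scored as the probability that at least one rule
    # is "correct", assuming (simplistically) that rules fire independently.
    score = 1.0
    for c in confidences:
        score *= (1.0 - c)
    return 1.0 - score
```

Because each rule is human-readable, the same list of fired rules that produced the score doubles as the explanation shown in the web interface.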


Author(s):  
Fuxiang Zhang ◽  
Xin Wang ◽  
Zhao Li ◽  
Jianxin Li

Representation learning of knowledge graphs aims to project both entities and relations as vectors in a continuous low-dimensional space. Relation Hierarchical Structure (RHS), which is constructed by a generalization relationship named subRelationOf between relations, can improve the overall performance of knowledge representation learning. However, most of the existing methods ignore this critical information, and a straightforward way of considering RHS may have a negative effect on the embeddings and thus reduce the model performance. In this paper, we propose a novel method named TransRHS, which is able to incorporate RHS seamlessly into the embeddings. More specifically, TransRHS encodes each relation as a vector together with a relation-specific sphere in the same space. Our TransRHS employs the relative positions among the vectors and spheres to model the subRelationOf, which embodies the inherent generalization relationships among relations. We evaluate our model on two typical tasks, i.e., link prediction and triple classification. The experimental results show that our TransRHS model significantly outperforms all baselines on both tasks, which verifies that the RHS information is significant to representation learning of knowledge graphs, and TransRHS can effectively and efficiently fuse RHS into knowledge graph embeddings.
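One simplified reading of the relative-position idea is a containment test: relation r is treated as a sub-relation of s when r's vector falls inside s's sphere. The sketch below illustrates that geometric intuition only; TransRHS's actual training objective is a margin-based loss over these relative positions, and all values here are invented.

```python
import numpy as np

def inside_sphere(r_vec, center, radius):
    # Illustrative subRelationOf test: a relation vector r_vec is a
    # "sub-relation" of the relation whose sphere is (center, radius)
    # when it lies inside that sphere.
    return np.linalg.norm(r_vec - center) <= radius
```

Nesting spheres then naturally encodes the hierarchy: every vector inside a child's sphere can also lie inside the parent's larger sphere.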


2012 ◽  
Vol 58 (2) ◽  
pp. 177-192 ◽  
Author(s):  
Marek Parfieniuk ◽  
Alexander Petrovsky

Near-Perfect Reconstruction Oversampled Nonuniform Cosine-Modulated Filter Banks Based on Frequency Warping and Subband Merging

A novel method for designing near-perfect reconstruction oversampled nonuniform cosine-modulated filter banks is proposed, which combines frequency warping and subband merging, and thus offers more flexibility than known techniques. On the one hand, desirable frequency partitionings can be better approximated. On the other hand, at the price of only a small loss in partitioning accuracy, both warping strength and number of channels before merging can be adjusted so as to minimize the computational complexity of a system. In particular, the coefficient of the function behind warping can be constrained to be a negative integer power of two, so that multiplications related to allpass filtering can be replaced with more efficient binary shifts. The main idea is accompanied by some contributions to the theory of warped filter banks. Namely, group delay equalization is thoroughly investigated, and it is shown how to avoid significant aliasing by channel oversampling. Our research revolves around filter banks for perceptual processing of sound, which are required to approximate the psychoacoustic scales well and need not guarantee perfect reconstruction.
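The power-of-two constraint on the warping coefficient can be illustrated with one step of a first-order allpass recursion y[n] = a·x[n] + x[n-1] − a·y[n-1]. With a = −2⁻ᵏ, each multiply-by-a reduces to a sign flip plus a binary shift; the float version below is a sketch of the arithmetic only, not the paper's fixed-point design.

```python
def allpass_step(x, x_prev, y_prev, k):
    # One step of the first-order allpass A(z) = (a + z^-1) / (1 + a*z^-1)
    # with warping coefficient a = -2**-k. In fixed-point hardware the
    # two multiplications by a become right-shifts by k bits.
    a = -(1.0 / (1 << k))
    return a * x + x_prev - a * y_prev
```

Chaining such allpass sections in place of unit delays is what warps the filter bank's frequency axis toward a psychoacoustic scale.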


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods only focus on triple facts in knowledge graphs. In addition, models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn more semantic features. More specifically, circular convolution of the embeddings of an entity and its types is used to map the head and tail entities to type-specific representations; a translation-based score function is then used to learn representations of triples. We evaluate our model on real-world datasets with two benchmark tasks, link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
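The two stages, a type-aware projection followed by a translation score, can be sketched as follows. Circular convolution is computed here via the FFT; the pairing of one entity with one type embedding and the plain TransE-style distance are simplifications for illustration, not the exact TransET model.

```python
import numpy as np

def circular_conv(a, b):
    # Circular convolution via the DFT: conv(a, b) = IFFT(FFT(a) * FFT(b)).
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def transet_score(h, h_type, r, t, t_type):
    # Map head and tail to type-specific representations, then apply a
    # translation-based score (higher = more plausible). Illustrative only.
    h_proj = circular_conv(h, h_type)
    t_proj = circular_conv(t, t_type)
    return -np.linalg.norm(h_proj + r - t_proj)
```

Convolving with the unit impulse leaves an embedding unchanged, so the projection degenerates gracefully when type information is uninformative.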


2021 ◽  
Author(s):  
Mojtaba Nayyeri ◽  
Gokce Muge Cil ◽  
Sahar Vahdati ◽  
Francesco Osborne ◽  
Mahfuzur Rahman ◽  
...  

2019 ◽  
Vol 24 (1-2) ◽  
pp. 108-117
Author(s):  
Khoma V.V. ◽  
Khoma Y.V. ◽  
Khoma P.P. ◽  
Sabodashko D.V. ◽  
...  

A novel method for ECG signal outlier processing based on autoencoder neural networks is presented in the article. Typically, heartbeats with serious waveform distortions are treated as outliers and are skipped in the authentication pipeline. The main idea of the paper is to correct these waveform distortions rather than skip them, in order to provide the system with a better statistical base. During the experiments, the optimal autoencoder architecture was selected. The open Physionet ECG-ID database was used to verify the proposed method. The results were compared with previous studies that considered the correction of anomalies based on a statistical approach. The autoencoder shows slightly lower accuracy than the statistical method, but it greatly simplifies the construction of biometric identification systems, since it does not require precise tuning of hyperparameters.
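The correct-rather-than-skip pipeline can be sketched as follows. Here `reconstruct` stands in for a trained autoencoder's encode/decode pass, and the error threshold is a free parameter; both are assumptions for illustration, not values from the paper.

```python
import numpy as np

def correct_outliers(beats, reconstruct, threshold):
    # For each heartbeat, compute its autoencoder reconstruction. Beats
    # whose reconstruction error exceeds the threshold are treated as
    # distorted outliers and replaced by the reconstruction, instead of
    # being dropped from the authentication pipeline.
    corrected = []
    for beat in beats:
        recon = reconstruct(beat)
        err = np.linalg.norm(beat - recon)
        corrected.append(recon if err > threshold else beat)
    return np.array(corrected)
```

Keeping every beat, corrected where necessary, is what enlarges the statistical base available to the downstream biometric classifier.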


2020 ◽  
Author(s):  
Albert A Gayle

Year-to-year emergence of West Nile virus has been sporadic and notoriously hard to predict. In Europe, 2018 saw a dramatic increase in the number of cases and locations affected. In this work, we demonstrate a novel method for predicting outbreaks and understanding what drives them. This method creates a simple model for each region that directly explains how each variable affects risk. Behind the scenes, each local explanation model is produced by a state-of-the-art AI engine. This engine unpacks and restructures output from an XGBoost machine learning ensemble. XGBoost, well-known for its predictive accuracy, has always been considered a "black box" system. Not any more. With only minimal data curation and no "tuning", our model predicted where the 2018 outbreak would occur with an AUC of 97%. This model was trained using data from 2010-2016 that reflected many domains of knowledge. Climate, sociodemographic, economic, and biodiversity data were all included. Our model furthermore explained the specific drivers of the 2018 outbreak for each affected region. These effect predictions were found to be consistent with the research literature in terms of priority, direction, magnitude, and size of effect. Aggregation and statistical analysis of local effects revealed strong cross-scale interactions. From this, we concluded that the 2018 outbreak was driven by large-scale climatic anomalies enhancing the local effect of mosquito vectors. We also identified substantial areas across Europe at risk for sudden outbreak, similar to that experienced in 2018. Taken as a whole, these findings highlight the role of climate in the emergence and transmission of West Nile virus. Furthermore, they demonstrate the crucial role that the emerging "eXplainable AI" (XAI) paradigm will have in predicting and controlling disease.
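The kind of local explanation described above, unpacking a tree ensemble's prediction into per-feature effects, can be illustrated on a single decision path: each split's contribution is the change in the node's mean prediction along the path taken. The hand-built tree and values below are invented for illustration; the paper's engine operates on a full XGBoost ensemble, not this toy.

```python
# Toy per-feature attribution for one decision tree (path-based, in the
# spirit of local tree-ensemble explanations). Each node dict carries the
# mean prediction "value" of the samples reaching it; a leaf has feature=None.
def path_contributions(x, tree):
    contrib = {}
    node = tree
    while node["feature"] is not None:
        branch = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        # The feature at this split "caused" the change in expected prediction.
        contrib[node["feature"]] = contrib.get(node["feature"], 0.0) + branch["value"] - node["value"]
        node = branch
    return node["value"], contrib
```

Summing contributions over all trees of an ensemble, and then aggregating them across regions, is what makes cross-scale statistical analysis of effects possible.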


2019 ◽  
pp. 129-141 ◽  
Author(s):  
Hui Xian Chia

This article examines the use of artificial intelligence (AI) and deep learning, specifically, to create financial robo-advisers. These machines have the potential to be perfectly honest fiduciaries, acting in their client’s best interests without conflicting self-interest or greed, unlike their human counterparts. However, the application of AI technology to create financial robo-advisers is not without risk. This article will focus on the unique risks posed by deep learning technology. One of the main fears regarding deep learning is that it is a “black box”: its decision-making process is opaque and not open to scrutiny even by the people who developed it. This poses a significant challenge to financial regulators, who would not be able to examine the underlying rationale and rules of the robo-adviser to determine its safety for public use. The rise of deep learning has been met with calls for ‘explainability’ of how deep learning agents make their decisions. This paper argues that greater explainability can be achieved by describing the ‘personality’ of deep learning robo-advisers, and further proposes a framework for describing the parameters of the deep learning model using concepts that can be readily understood by people without technical expertise: namely, whether the robo-adviser is ‘greedy’, ‘selfish’ or ‘prudent’. Greater understanding will enable regulators and consumers to better judge the safety and suitability of deep learning financial robo-advisers.

