Learning to Complete Knowledge Graphs with Deep Sequential Models

2019, Vol 1 (3), pp. 289-308
Author(s): Lingbing Guo, Qingheng Zhang, Wei Hu, Zequn Sun, Yuzhong Qu

Knowledge graph (KG) completion aims at filling in the missing facts in a KG, where a fact is typically represented as a triple of the form (head, relation, tail). Traditional KG completion methods require two-thirds of a triple to be provided (e.g., head and relation) in order to predict the remaining element. In this paper, we propose a new method that extends multi-layer recurrent neural networks (RNNs) to model triples in a KG as sequences. It obtains state-of-the-art performance on the common entity prediction task, i.e., given the head (or tail) and relation, predicting the tail (or head), using two benchmark data sets. Furthermore, the deep sequential character of our method enables it to predict relations given only the head (or tail), and even to predict whole triples. Our experiments on these two new KG completion tasks demonstrate that our method achieves superior performance compared with several alternative methods.
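As a rough illustration of the sequence view of a triple (a minimal sketch, not the authors' exact model), the PyTorch snippet below feeds a (head, relation) prefix to a multi-layer RNN and scores candidates for the next element; the vocabulary sizes, shared embedding table, and GRU choice are assumptions made here for brevity.

```python
import torch
import torch.nn as nn

# Toy sketch: treat a triple (head, relation, tail) as a short sequence and
# let a multi-layer RNN predict the next element at each step.
# Sizes and the single shared id space are illustrative assumptions.
NUM_ENTITIES, NUM_RELATIONS, DIM = 1000, 50, 128

class TripleRNN(nn.Module):
    def __init__(self):
        super().__init__()
        # One vocabulary for entities and relations; relations are offset.
        self.embed = nn.Embedding(NUM_ENTITIES + NUM_RELATIONS, DIM)
        self.rnn = nn.GRU(DIM, DIM, num_layers=2, batch_first=True)
        self.out = nn.Linear(DIM, NUM_ENTITIES + NUM_RELATIONS)

    def forward(self, prefix):                 # prefix: (batch, seq_len) ids
        h, _ = self.rnn(self.embed(prefix))    # (batch, seq_len, DIM)
        return self.out(h[:, -1])              # scores for the next element

model = TripleRNN()
head, relation = torch.tensor([[3]]), torch.tensor([[NUM_ENTITIES + 7]])
# Entity prediction: score tails given (head, relation).
tail_scores = model(torch.cat([head, relation], dim=1))[:, :NUM_ENTITIES]
# Relation prediction: score relations given the head alone.
rel_scores = model(head)[:, NUM_ENTITIES:]
```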

2018, Vol 8 (12), pp. 2421
Author(s): Chongya Song, Alexander Pons, Kang Yen

In the field of network intrusion, malware usually evades anomaly detection by disguising malicious behavior as legitimate access. Detecting these attacks from network traffic has therefore become a challenge in this adversarial setting. In this paper, an enhanced Hidden Markov Model, called the Anti-Adversarial Hidden Markov Model (AA-HMM), is proposed to effectively detect evasion patterns, using the Dynamic Window and Threshold techniques to achieve adaptive, anti-adversarial, and online-learning abilities. In addition, a concept called Pattern Entropy is defined and acts as the foundation of AA-HMM. We evaluate the effectiveness of our approach on two well-known benchmark data sets, NSL-KDD and CTU-13, in terms of the common performance metrics as well as the algorithm's adaptation and anti-adversary abilities.
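The general idea of scoring traffic windows with an HMM and flagging low-likelihood windows can be sketched as below; the toy HMM parameters, window size, and the simple threshold rule are assumptions standing in for AA-HMM's actual Dynamic Window, Dynamic Threshold, and Pattern Entropy mechanics.

```python
import numpy as np

# Illustrative sliding-window HMM scoring with a simple threshold.
# Toy parameters; not AA-HMM's actual model or update rules.
A = np.array([[0.9, 0.1], [0.2, 0.8]])    # state transition matrix
B = np.array([[0.8, 0.2], [0.3, 0.7]])    # emission probs for 2 symbols
pi = np.array([0.5, 0.5])                 # initial state distribution

def window_loglik(obs):
    """Scaled forward algorithm: log P(obs | HMM)."""
    alpha = pi * B[:, obs[0]]
    c = alpha.sum()
    loglik = np.log(c)
    alpha /= c
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        c = alpha.sum()
        loglik += np.log(c)
        alpha /= c
    return loglik

# Score a stream of binned 0/1 traffic features window by window.
stream = np.random.randint(0, 2, size=500)
win = 25
scores = np.array([window_loglik(stream[s:s + win])
                   for s in range(0, len(stream) - win, win)])
# Stand-in for an adaptive threshold: flag windows far below typical behavior.
threshold = scores.mean() - 2 * scores.std()
suspicious = np.where(scores < threshold)[0]
print("suspicious windows:", suspicious)
```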


Author(s): Xin Zhong, Frank Y. Shih

In this paper, we present a robust multibit image watermarking scheme that withstands common image-processing attacks as well as affine distortions. The scheme combines contrast modulation and effective synchronization to achieve a large payload and high robustness. We analyze the robustness, the payload, and the lower bound of fidelity. To resynchronize the watermark under affine distortions, we develop a self-referencing rectification method that detects the distortion parameters for reconstruction using the center of mass of affine covariant regions. The effectiveness and advantages of the proposed scheme are confirmed by experimental results, which show superior performance compared with several state-of-the-art watermarking methods.
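As a generic illustration of contrast-modulation embedding (not the authors' scheme), one bit can be hidden in an image block by scaling the block's deviation from its mean, and recovered by comparing the block's contrast against a reference; the block size, strength, and reference choice below are assumptions.

```python
import numpy as np

# Generic contrast-modulation sketch: embed one bit per 8x8 block by
# scaling the block's deviation from its mean (strength is an assumption).
def embed_bit(block, bit, strength=0.06):
    mean = block.mean()
    scale = 1 + strength if bit else 1 - strength
    return np.clip(mean + scale * (block - mean), 0, 255)

def extract_bit(block, reference_contrast):
    # Bit 1 if the block's contrast exceeds the reference, else 0.
    return int(block.std() > reference_contrast)

rng = np.random.default_rng(0)
img_block = rng.integers(0, 256, size=(8, 8)).astype(float)
reference = img_block.std()           # in practice estimated from context
marked = embed_bit(img_block, bit=1)
print(extract_bit(marked, reference))  # -> 1
```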


2017, Vol 117 (1), pp. 198-212
Author(s): Bo Zou, Feng Guo, Michael Song

Purpose
Although the extant innovation literature has extensively explored the attributes of different types of innovation capability, little is known yet about the common phenomenon of the rebound and durableness of innovation capability. The purpose of this paper is therefore to address these aspects by introducing the concepts of elastic and plastic innovation capability.

Design/methodology/approach
Based on the behavioral theory of the firm, the authors propose a theoretical model to study the antecedents and outcomes of elastic and plastic innovation capability. The empirical test involves two data sets covering 183 companies in three industries. The empirical evidence supports the existence of the concepts of elastic and plastic innovation capability.

Findings
The research findings also demonstrate that a firm's past performance is positively related to elastic innovation capability. Elastic innovation capability and organizational aspiration are positively related to plastic innovation capability. Both elastic and plastic innovation capability significantly lead to superior performance.

Originality/value
This study makes three main contributions to the existing innovation literature. First, the authors extend existing knowledge on innovation capability by proposing two new types of innovation capability: elastic and plastic innovation capability. Second, the proposed concepts of elastic and plastic innovation capability contribute to the theory of dynamic capability. Finally, this study reveals the micro-mechanisms of elastic and plastic innovation capability from the perspective of the behavioral theory of the firm, as well as their different effects on firm performance.


2021, Vol 11 (5), pp. 2144
Author(s): Timothy Sands

Many research manuscripts propose new methodologies, while others compare several state-of-the-art methods to ascertain the best method for a given application. This manuscript does both by introducing deterministic artificial intelligence (D.A.I.) to control direct current motors used by unmanned underwater vehicles (amongst other applications), and directly comparing the performance of three state-of-the-art nonlinear adaptive control techniques. D.A.I. involves the assertion of self-awareness statements and uses optimal (in a 2-norm sense) learning to compensate for the deleterious effects of error sources. This research reveals that deterministic artificial intelligence yields 4.8% lower mean and 211% lower standard deviation of tracking errors as compared to the best modeling method investigated (indirect self-tuner without process zero cancellation and minimum phase plant). The improved performance cannot be attributed to superior estimation. Coefficient estimation was merely on par with the best alternative methods; some coefficients were estimated more accurately, others less. Instead, the superior performance seems to be attributable to the modeling method. One noteworthy feature is that D.A.I. very closely followed a challenging square wave without overshoot—successfully settling at each switch of the square wave—while all of the other state-of-the-art methods were unable to do so.
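The "optimal (in a 2-norm sense) learning" can be illustrated with ordinary least squares: estimate motor model coefficients that minimize the squared residual, then use them in a feedforward law. The first-order DC-motor model, simulated data, and feedforward form below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

# Sketch of 2-norm-optimal (least-squares) learning of DC-motor coefficients.
# Assumed model: omega_dot = a*omega + b*u, with simulated noisy data.
rng = np.random.default_rng(1)
a_true, b_true, dt = -2.0, 4.0, 0.01
omega, rows, targets = 0.0, [], []
for k in range(2000):
    u = np.sin(0.05 * k)                          # excitation input
    omega_dot = a_true * omega + b_true * u + rng.normal(0, 0.05)
    rows.append([omega, u])
    targets.append(omega_dot)
    omega += omega_dot * dt

Phi, y = np.array(rows), np.array(targets)
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # minimizes ||Phi @ theta - y||_2
a_hat, b_hat = theta
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")

# Feedforward use of the learned coefficients to track a desired omega_dot.
def feedforward(omega_now, omega_dot_desired):
    return (omega_dot_desired - a_hat * omega_now) / b_hat
```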


2021
Author(s): Jiahua Rao, Shuangjia Zheng, Ying Song, Jianwen Chen, Chengtao Li, ...

Summary
Recently, novel representation learning algorithms have shown potential for predicting molecular properties. However, unified frameworks have not yet emerged for fairly measuring algorithmic progress, and the experimental procedures of different representation models often lack rigor and are hardly reproducible. Herein, we have developed MolRep, which unifies 16 state-of-the-art models across 4 popular molecular representations for application and comparison. Furthermore, we ran more than 12.5 million experiments to optimize the hyperparameters for each method on 12 common benchmark data sets. As a result, CMPNN achieves the best overall results, ranking 1st in 5 of the 12 tasks with an average rank of 1.75. ECC performs well on classification tasks and MAT on regression tasks (each ranking 1st on 3 tasks), with average ranks of 2.71 and 2.6, respectively.

Availability
The source code is available at: https://github.com/biomed-AI/MolRep

Supplementary information
Supplementary data are available online.
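The average-rank comparison reported above can, in principle, be reproduced by ranking each model's score per task and averaging the ranks. The snippet below is a minimal sketch with placeholder scores; it is not MolRep's own API or its actual results.

```python
import numpy as np

# Illustrative average-rank computation across benchmark tasks.
# The scores are placeholders, not results from MolRep.
scores = {                       # higher = better, one entry per task
    "CMPNN": [0.91, 0.88, 0.85, 0.79],
    "ECC":   [0.90, 0.84, 0.86, 0.75],
    "MAT":   [0.87, 0.86, 0.83, 0.80],
}
models = list(scores)
matrix = np.array([scores[m] for m in models])        # (models, tasks)
# Rank 1 = best score on that task (double argsort gives per-task ranks).
ranks = (-matrix).argsort(axis=0).argsort(axis=0) + 1
for m, r in zip(models, ranks.mean(axis=1)):
    print(f"{m}: average rank {r:.2f}")
```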


2020
Author(s): Y Sun, Bing Xue, Mengjie Zhang, GG Yen

© 2019 IEEE. The performance of convolutional neural networks (CNNs) relies heavily on their architectures. In order to design a CNN with promising performance, extensive expertise in both CNNs and the investigated problem domain is required, which is not necessarily available to every interested user. To address this problem, we propose to automatically evolve CNN architectures using a genetic algorithm (GA) based on ResNet and DenseNet blocks. The proposed algorithm is completely automatic in designing CNN architectures; in particular, it needs neither preprocessing before it starts nor postprocessing of the resulting CNNs. Furthermore, the proposed algorithm does not require users to have domain knowledge of CNNs, the investigated problem, or even GAs. The proposed algorithm is evaluated on the CIFAR10 and CIFAR100 benchmark data sets against 18 state-of-the-art peer competitors. Experimental results show that it outperforms both state-of-the-art hand-crafted CNNs and the CNNs designed by automatic peer competitors in classification performance, and achieves competitive classification accuracy against semiautomatic peer competitors. In addition, the proposed algorithm consumes far less computational resource than most peer competitors in finding the best CNN architectures.
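The encoding idea can be sketched as a GA over lists of block genes, as below. The block choices, fixed gene length, rates, and the placeholder fitness are assumptions; in the actual setting, fitness would come from training and validating the decoded CNN, and the encoding may be variable-length.

```python
import random

# Toy GA over CNN architectures encoded as lists of block genes.
# Illustrative assumptions throughout; not the paper's exact algorithm.
BLOCKS = ["resnet", "densenet", "pool"]

def random_individual(length=8):
    return [random.choice(BLOCKS) for _ in range(length)]

def crossover(p1, p2):
    point = random.randrange(1, len(p1))          # one-point crossover
    return p1[:point] + p2[point:]

def mutate(ind, rate=0.1):
    return [random.choice(BLOCKS) if random.random() < rate else g for g in ind]

def fitness(ind):
    # Placeholder standing in for validation accuracy of the decoded CNN.
    return ind.count("resnet") + 0.5 * ind.count("densenet")

population = [random_individual() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + children
print("best architecture:", max(population, key=fitness))
```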


2021
Author(s): Stefan Canzar, Van Hoan Do, Slobodan Jelic, Soeren Laue, Domagoj Matijevic, ...

Metric multidimensional scaling is one of the classical methods for embedding data into low-dimensional Euclidean space. It creates the low-dimensional embedding by approximately preserving the pairwise distances between the input points. However, current state-of-the-art approaches only scale to a few thousand data points. For larger data sets such as those occurring in single-cell RNA sequencing experiments, the running time becomes prohibitively large and thus alternative methods such as PCA are widely used instead. Here, we propose a neural network based approach for solving the metric multidimensional scaling problem that is orders of magnitude faster than previous state-of-the-art approaches, and hence scales to data sets with up to a few million cells. At the same time, it provides a non-linear mapping between high- and low-dimensional space that can place previously unseen cells in the same embedding.
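The core idea can be sketched with a small network trained to minimize a stress objective on mini-batches: the squared difference between pairwise distances in the input space and in the embedding. The network size, batch size, and optimizer settings below are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

# Sketch: an MLP maps points to 2-D while minimizing MDS stress on
# mini-batches, i.e. (d_high(x_i, x_j) - d_low(f(x_i), f(x_j)))^2.
X = torch.randn(5000, 50)                        # stand-in for expression data
net = nn.Sequential(nn.Linear(50, 128), nn.ReLU(), nn.Linear(128, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):
    idx = torch.randint(0, X.shape[0], (256,))
    batch = X[idx]
    d_high = torch.cdist(batch, batch)           # distances in input space
    d_low = torch.cdist(net(batch), net(batch))  # distances in the embedding
    loss = ((d_high - d_low) ** 2).mean()        # stress surrogate
    opt.zero_grad(); loss.backward(); opt.step()

embedding = net(X).detach()       # the learned map also embeds unseen points
```

Because the embedding is a parametric map rather than a per-point optimization, previously unseen cells can be placed in the same low-dimensional space by a single forward pass.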


Author(s): Zhitao Wang, Wenjie Li

A series of recent studies formulated the diffusion prediction problem as a sequence prediction task and proposed several sequential models based on recurrent neural networks. However, non-sequential properties exist in real diffusion cascades, which do not strictly follow the sequential assumptions of previous work. In this paper, we propose a hierarchical diffusion attention network (HiDAN), which adopts a non-sequential framework with two-level attention mechanisms for diffusion prediction. At the user level, a dependency attention mechanism is proposed to dynamically capture historical user-to-user dependencies and extract dependency-aware user information. At the cascade (i.e., sequence) level, a time-aware influence attention is designed to infer a future user's possible dependencies on historical users by considering both inherent user importance and time-decay effects. Evaluations on three real diffusion datasets demonstrate that HiDAN is significantly more effective and efficient than state-of-the-art sequential models. Further case studies illustrate that HiDAN can accurately capture diffusion dependencies.
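Time-aware attention of this general kind can be sketched as softmax weights over historical user embeddings that combine a relevance score with an exponential time-decay term; the dot-product score, decay form, and rate below are assumed for illustration and are not HiDAN's exact equations.

```python
import numpy as np

# Sketch of time-aware attention over historical users in a cascade.
# Assumed forms: dot-product relevance plus a linear-in-time decay penalty.
rng = np.random.default_rng(0)
d, n_hist = 16, 5
H = rng.normal(size=(n_hist, d))          # embeddings of historical users
q = rng.normal(size=d)                    # query: candidate next user
elapsed = np.array([50.0, 40.0, 30.0, 10.0, 1.0])   # time since each activation

lam = 0.05                                 # decay rate (assumption)
scores = H @ q - lam * elapsed             # relevance minus time-decay penalty
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax attention weights

context = weights @ H                      # dependency-aware cascade summary
print(np.round(weights, 3))
```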


2021, Vol 2021, pp. 1-12
Author(s): Wenyun Gao, Sheng Dai, Stanley Ebhohimhen Abhadiomhen, Wei He, Xinghui Yin

Correlation learning is a technique for finding a common representation in cross-domain and multiview datasets. However, most existing methods are not robust enough to handle noisy data, so the learned common representation matrix can easily be influenced by noisy samples inherent in different instances of the data. In this paper, we propose a novel correlation learning method based on a low-rank representation, which learns a common representation between two instances of data in a latent subspace. Specifically, we begin by learning a low-rank representation matrix and an orthogonal rotation matrix to handle the noisy samples in one instance of the data, so that a second instance of the data can linearly reconstruct the low-rank representation. Our method then finds a similarity matrix that closely approximates the common low-rank representation matrix, such that a rank constraint on its Laplacian matrix reveals the clustering structure explicitly without any spectral postprocessing. Extensive experimental results on the ORL, Yale, Coil-20, Caltech 101-20, and UCI digits datasets demonstrate that our method outperforms other state-of-the-art methods on six evaluation metrics.
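The role of the Laplacian rank constraint can be illustrated with a small example: when the similarity matrix is (block-)disconnected, the number of zero eigenvalues of its Laplacian equals the number of connected components, so cluster labels follow directly without spectral postprocessing. The toy similarity matrix below is an assumption, not output of the proposed method.

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

# Toy illustration: a similarity matrix whose Laplacian has c zero
# eigenvalues yields exactly c connected components, i.e. c clusters.
S = np.zeros((6, 6))
S[:3, :3] = 0.9          # cluster 1
S[3:, 3:] = 0.8          # cluster 2
np.fill_diagonal(S, 1.0)

L = np.diag(S.sum(axis=1)) - S                 # unnormalized graph Laplacian
eigvals = np.linalg.eigvalsh(L)
n_clusters = int(np.sum(eigvals < 1e-8))       # multiplicity of eigenvalue 0
print("zero eigenvalues (clusters):", n_clusters)

# Cluster labels follow directly from the graph's connected components.
n_comp, labels = connected_components(S > 0)
print(n_comp, labels)                          # -> 2, [0 0 0 1 1 1]
```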

