Story Generation Using Knowledge Graph under Psychological States

2021, Vol 2021, pp. 1-12
Author(s): Feifei Xu, Xinpeng Wang, Shanlin Zhou

Story generation, which aims to generate a story that people can easily understand, has attracted increasing attention from researchers in recent years. However, a good story usually requires interesting and emotional plots. Previous works only consider a specific or binary emotion, such as positive or negative. In our work, we propose a Knowledge-Aware Generation framework under Controllable CondItions (K-GuCCI). The model assigns a change line of psychological states to story characters, which makes the story develop following this setting. Besides, we incorporate a knowledge graph into the model to improve the coherence of the story. Moreover, we introduce a metric, AGPS, to evaluate the accuracy of the psychological states in generated stories. Experiments show that the proposed model improves over standard benchmarks while also generating reliable and valid stories.
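The abstract's idea of a preset psychological-state trajectory can be illustrated with a small sketch. This is not the authors' K-GuCCI implementation; the module name, sizes, and the simple GRU decoder are assumptions made only to show how a per-sentence state label could condition generation.

```python
# Minimal sketch (not the authors' K-GuCCI model): a GRU decoder conditioned on
# a per-step psychological-state label, so the generated story can follow a
# preset "change line" of states. All names and sizes are illustrative.
import torch
import torch.nn as nn

class StateConditionedDecoder(nn.Module):
    def __init__(self, vocab_size=5000, n_states=8, emb=128, hidden=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, emb)
        self.state_emb = nn.Embedding(n_states, emb)   # one psychological state per step
        self.gru = nn.GRU(emb * 2, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens, state_ids):
        # tokens: (batch, seq_len) token ids; state_ids: (batch, seq_len) state id per step
        x = torch.cat([self.tok_emb(tokens), self.state_emb(state_ids)], dim=-1)
        h, _ = self.gru(x)
        return self.out(h)                              # next-token logits

# Usage: feed the planned state trajectory alongside the tokens.
dec = StateConditionedDecoder()
logits = dec(torch.randint(0, 5000, (2, 20)), torch.randint(0, 8, (2, 20)))
print(logits.shape)  # torch.Size([2, 20, 5000])
```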


Author(s): Peilian Zhao, Cunli Mao, Zhengtao Yu

Aspect-Based Sentiment Analysis (ABSA), a fine-grained opinion-mining task that aims to extract the sentiment toward a specific target from text, is important in many real-world applications, especially in the legal field. In this paper, we therefore address two problems of End-to-End Aspect-Based Sentiment Analysis (E2E-ABSA) in the legal field: the limited amount of labeled training data and the neglect of in-domain knowledge representation. We propose a new deep-learning method, named Semi-ETEKGs, which applies an E2E framework with knowledge graph (KG) embeddings in the legal field after data augmentation (DA). Specifically, we pre-train a BERT embedding and an in-domain KG embedding on unlabeled data and on labeled data with case elements after DA, and then feed both embeddings into the E2E framework to classify the polarity of each target entity. Finally, we build a case-related dataset based on a popular ABSA benchmark to evaluate Semi-ETEKGs, and experiments on this dataset of microblog comments show that our proposed model significantly outperforms the compared methods.
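As a rough illustration of the embedding fusion the abstract describes, the sketch below concatenates a text vector with a KG vector and classifies target polarity. The class name, dimensions (768 for a BERT-style vector, 100 for a KG vector), and three-way polarity are assumptions, not the paper's Semi-ETEKGs code.

```python
# Minimal sketch (assumptions only): fuse a pre-trained text embedding with an
# in-domain KG embedding and classify the sentiment polarity of a target entity.
import torch
import torch.nn as nn

class FusedPolarityClassifier(nn.Module):
    def __init__(self, text_dim=768, kg_dim=100, n_labels=3):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(text_dim + kg_dim, 256), nn.ReLU(),
            nn.Linear(256, n_labels))                  # e.g. positive / neutral / negative

    def forward(self, text_vec, kg_vec):
        return self.fc(torch.cat([text_vec, kg_vec], dim=-1))

clf = FusedPolarityClassifier()
polarity_logits = clf(torch.randn(4, 768), torch.randn(4, 100))
print(polarity_logits.shape)  # torch.Size([4, 3])
```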



Author(s): Bayu Distiawan Trisedya, Jianzhong Qi, Rui Zhang

The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models have been proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs and hence enables the integration of multiple knowledge graphs. Our model exploits the large numbers of attribute triples in the knowledge graphs and generates attribute character embeddings. The attribute character embedding shifts the entity embeddings of two knowledge graphs into the same space by computing the similarity between entities based on their attributes. We use a transitivity rule to further enrich the number of attributes per entity and enhance the attribute character embedding. Experiments on real-world knowledge bases show that our proposed model achieves consistent improvements of over 50% in hits@1 over the baseline models on the entity alignment task.
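A minimal sketch of the attribute-character-embedding idea, under assumptions: a character-level LSTM encodes attribute values, and cosine similarity between the resulting vectors indicates whether two entities' attributes match. This is illustrative only and not the authors' model.

```python
# Minimal sketch (illustrative only): character-level encoder over attribute
# values, used to score whether two entities from different KGs refer to the
# same real-world entity. Dimensions and the cosine scoring are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CharAttributeEncoder(nn.Module):
    def __init__(self, n_chars=128, emb=64, hidden=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, char_ids):
        # char_ids: (batch, max_len) character codes of an attribute value
        _, (h, _) = self.lstm(self.char_emb(char_ids))
        return h[-1]                                   # one vector per attribute value

def attribute_similarity(enc, value_a, value_b):
    """Cosine similarity between character embeddings of two attribute values."""
    return F.cosine_similarity(enc(value_a), enc(value_b), dim=-1)

enc = CharAttributeEncoder()
a = torch.randint(0, 128, (1, 12))    # e.g. characters of one date string
b = torch.randint(0, 128, (1, 12))    # e.g. characters of another date string
print(attribute_similarity(enc, a, b))
```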



2020, Vol 34 (05), pp. 9612-9619
Author(s): Zhao Zhang, Fuzhen Zhuang, Hengshu Zhu, Zhiping Shi, Hui Xiong, et al.

The rapid proliferation of knowledge graphs (KGs) has changed the paradigm for various AI-related applications. Despite their large sizes, modern KGs are far from complete and comprehensive. This has motivated research on knowledge graph completion (KGC), which aims to infer missing values in incomplete knowledge triples. However, most existing KGC models treat the triples in KGs independently, without leveraging the inherent and valuable information in the local neighborhood surrounding an entity. To this end, we propose a Relational Graph neural network with Hierarchical ATtention (RGHAT) for the KGC task. The proposed model is equipped with a two-level attention mechanism: (i) the first level is relation-level attention, inspired by the intuition that different relations have different weights for indicating an entity; (ii) the second level is entity-level attention, which enables our model to highlight the importance of different neighboring entities under the same relation. The hierarchical attention mechanism makes our model more effective at utilizing the neighborhood information of an entity. Finally, we extensively validate the superiority of RGHAT against various state-of-the-art baselines.
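The two-level attention can be sketched as follows, purely as an illustration under assumed dot-product scoring (the released RGHAT model is more elaborate): entity-level attention summarizes the neighbors under each relation, and relation-level attention mixes the per-relation summaries.

```python
# Minimal sketch of hierarchical (entity-level then relation-level) attention.
import torch
import torch.nn.functional as F

def hierarchical_aggregate(entity_vec, neighbors_by_relation, rel_vecs):
    """
    entity_vec: (d,) embedding of the centre entity.
    neighbors_by_relation: dict rel_id -> (n_i, d) neighbor embeddings under that relation.
    rel_vecs: dict rel_id -> (d,) relation embedding.
    """
    rel_summaries, rel_scores = [], []
    for r, neigh in neighbors_by_relation.items():
        # entity-level attention: weight neighbors by dot-product with the centre entity
        alpha = F.softmax(neigh @ entity_vec, dim=0)          # (n_i,)
        rel_summaries.append(alpha @ neigh)                   # (d,)
        # relation-level attention score for this relation
        rel_scores.append(torch.dot(entity_vec, rel_vecs[r]))
    summaries = torch.stack(rel_summaries)                    # (R, d)
    beta = F.softmax(torch.stack(rel_scores), dim=0)          # (R,)
    return beta @ summaries                                   # aggregated neighborhood vector

d = 16
out = hierarchical_aggregate(
    torch.randn(d),
    {0: torch.randn(3, d), 1: torch.randn(2, d)},
    {0: torch.randn(d), 1: torch.randn(d)})
print(out.shape)  # torch.Size([16])
```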



2020, pp. 016555152093251
Author(s): Haoze Yu, Haisheng Li, Dianhui Mao, Qiang Cai

To achieve real-time updating of a domain knowledge graph and improve relationship extraction during its construction, a domain knowledge graph construction method is proposed. Based on the structured knowledge in Wikipedia's classification system, we acquire the concepts and instances contained in subject areas. A relationship extraction algorithm based on co-word analysis is proposed to extract classification relationships from semi-structured open labels. A Bi-GRU remote-supervised relationship extraction model, based on a multiple-scale attention mechanism and an improved cross-entropy loss function, is proposed to obtain the non-classification relationships between concepts in unstructured texts. Experiments show that the proposed model performs better than existing methods. Based on the obtained concepts, instances, and relationships, a domain knowledge graph is constructed, and the domain-independent nodes and relationships it contains are removed through a vector variance algorithm. The effectiveness of the proposed method is verified by constructing a food-domain knowledge graph based on Wikipedia.
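The co-word analysis step can be illustrated with a toy sketch: count how often two open labels co-occur on the same page and link pairs whose co-occurrence strength passes a threshold. The inclusion-style strength index and the threshold are assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of co-word analysis over semi-structured open labels.
from itertools import combinations
from collections import Counter

def coword_links(label_sets, threshold=0.5):
    """label_sets: list of label sets, one per page; returns linked label pairs."""
    pair_counts, label_counts = Counter(), Counter()
    for labels in label_sets:
        label_counts.update(labels)
        pair_counts.update(combinations(sorted(labels), 2))
    links = []
    for (a, b), c in pair_counts.items():
        strength = c / min(label_counts[a], label_counts[b])   # inclusion-style index
        if strength >= threshold:
            links.append((a, b, round(strength, 2)))
    return links

pages = [{"food", "fruit", "apple"}, {"food", "fruit"}, {"food", "vegetable"}]
print(coword_links(pages))   # label pairs strong enough to become classification edges
```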



2020, Vol 10 (8), pp. 2651
Author(s): Su Jeong Choi, Hyun-Je Song, Seong-Bae Park

Knowledge bases such as Freebase, YAGO, DBpedia, and NELL contain a large number of facts with various entities and relations. Because they store so many facts, they are regarded as core resources for many natural language processing tasks. Nevertheless, they are normally incomplete and have many missing facts, which keeps them from being used in diverse applications in spite of their usefulness. It is therefore important to complete knowledge bases. Knowledge graph embedding is one of the promising approaches to completing a knowledge base, and thus many variants of knowledge graph embedding have been proposed. It maps all entities and relations in a knowledge base onto a low-dimensional vector space; candidate facts that are plausible in this space are then determined to be missing facts. However, any single knowledge graph embedding is insufficient to complete a knowledge base. As a solution to this problem, this paper defines knowledge base completion as a ranking task and proposes a committee-based knowledge graph embedding model for improving the performance of knowledge base completion. Since each knowledge graph embedding has its own idiosyncrasy, we form a committee of various knowledge graph embeddings to reflect various perspectives. After ranking all candidate facts according to their plausibility computed by the committee, the top-k facts are chosen as missing facts. Our experimental results on two data sets show that the proposed model achieves higher performance than any single knowledge graph embedding and performs robustly regardless of k. These results prove that the proposed model considers various perspectives in measuring the plausibility of candidate facts.
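A small sketch of the committee idea, under assumptions: each member model scores every candidate fact, the scores are rank-normalized so different embedding scales become comparable, and the averaged score ranks the candidates for top-k selection. The normalization scheme is illustrative, not necessarily the paper's.

```python
# Minimal sketch of committee-based ranking for knowledge base completion.
import numpy as np

def committee_rank(score_matrix, k=3):
    """
    score_matrix: (n_members, n_candidates) raw plausibility scores,
    higher = more plausible. Returns indices of the top-k candidates.
    """
    # rank-normalise each member's scores to [0, 1] so scales are comparable
    ranks = score_matrix.argsort(axis=1).argsort(axis=1)
    normalised = ranks / (score_matrix.shape[1] - 1)
    committee_score = normalised.mean(axis=0)
    return np.argsort(-committee_score)[:k]

scores = np.array([[0.9, 0.1, 0.4, 0.8],    # e.g. a TransE-style member
                   [2.5, 0.3, 1.9, 2.2],    # e.g. a DistMult-style member
                   [0.7, 0.2, 0.9, 0.6]])   # e.g. a ComplEx-style member
print(committee_rank(scores, k=2))          # indices of the 2 most plausible candidates
```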



2020, Vol 34 (10), pp. 13953-13954
Author(s): Xu Wang, Shuai Zhao, Bo Cheng, Jiale Han, Yingting Li, et al.

Multi-hop question answering models based on knowledge graphs have been studied extensively. Most existing models predict the single answer with the highest probability by ranking candidate answers. However, this ranking approach prevents them from predicting all of the correct answers. In this paper, we propose a novel model that converts the ranking of candidate answers into an individual prediction for each candidate, named the heterogeneous knowledge graph based multi-hop and multi-answer model (HGMAN). HGMAN captures more informative representations of relations with the help of our heterogeneous graph, which consists of multiple entity nodes and relation nodes. We rely on a graph convolutional network for multi-hop reasoning and then on binary classification for each node to obtain multiple answers. Experimental results on the MetaQA dataset show that our proposed model outperforms all baselines.
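The shift from ranking to per-candidate prediction can be sketched as below: a single graph-convolution step over node features followed by an independent sigmoid per node, so every node above a threshold is returned as an answer. The layer sizes and toy adjacency are assumptions, not HGMAN's actual architecture.

```python
# Minimal sketch: graph convolution plus per-node binary classification,
# so several nodes can be returned as answers at once.
import torch
import torch.nn as nn

class MultiAnswerGCN(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        self.cls = nn.Linear(dim, 1)

    def forward(self, node_feats, adj):
        # one normalised-adjacency convolution step, then a per-node answer logit
        h = torch.relu(adj @ self.lin(node_feats))
        return self.cls(h).squeeze(-1)              # (n_nodes,) logits

n, d = 6, 64
adj = torch.eye(n)                                  # toy adjacency (self-loops only)
model = MultiAnswerGCN(d)
probs = torch.sigmoid(model(torch.randn(n, d), adj))
answers = (probs > 0.5).nonzero(as_tuple=True)[0]   # every node above threshold is an answer
print(answers)
```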



2020, Vol 34 (07), pp. 10575-10582
Author(s): Riquan Chen, Tianshui Chen, Xiaolu Hui, Hefeng Wu, Guanbin Li, et al.

Few-shot learning aims to learn novel categories from very few samples, given some base categories with sufficient training samples. The main challenge of this task is that the novel categories are prone to be dominated by color, texture, object shape, or background context (namely, specificity), which are distinctive for the given few training samples but not common to the corresponding categories (see Figure 1). Fortunately, we find that transferring information from the correlated base categories can help learn the novel concepts and thus prevent a novel concept from being dominated by specificity. Besides, incorporating semantic correlations among different categories can effectively regularize this information transfer. In this work, we represent the semantic correlations in the form of a structured knowledge graph and integrate this graph into deep neural networks to promote few-shot learning via a novel Knowledge Graph Transfer Network (KGTN). Specifically, by initializing each node with the classifier weight of the corresponding category, a propagation mechanism is learned to adaptively propagate node messages through the graph, exploring node interactions and transferring classifier information from the base categories to the novel ones. Extensive experiments on the ImageNet dataset show significant performance improvement compared with current leading competitors. Furthermore, we construct an ImageNet-6K dataset that covers a larger set of categories, i.e., 6,000 categories, and experiments on this dataset further demonstrate the effectiveness of our proposed model.
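The propagation mechanism can be sketched under assumptions: node states start from per-category classifier weights and are repeatedly mixed with graph neighbours through a small learned transform. The update rule below is illustrative, not the released KGTN code.

```python
# Minimal sketch: propagate classifier weights along a semantic category graph,
# so base-category classifiers inform novel-category ones.
import torch
import torch.nn as nn

class ClassifierPropagation(nn.Module):
    def __init__(self, feat_dim, n_steps=2):
        super().__init__()
        self.n_steps = n_steps
        self.mix = nn.Linear(feat_dim, feat_dim)

    def forward(self, classifier_weights, adj_norm):
        # classifier_weights: (n_classes, feat_dim); adj_norm: (n_classes, n_classes)
        w = classifier_weights
        for _ in range(self.n_steps):
            w = w + torch.tanh(adj_norm @ self.mix(w))   # mix in neighbour information
        return w                                         # refined classifiers for all classes

n_classes, feat_dim = 10, 32
prop = ClassifierPropagation(feat_dim)
graph = torch.softmax(torch.randn(n_classes, n_classes), dim=1)   # toy normalised adjacency
refined = prop(torch.randn(n_classes, feat_dim), graph)
print(refined.shape)  # torch.Size([10, 32])
```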



Author(s): Juntao Li, Lidong Bing, Lisong Qiu, Dongmin Chen, Dongyan Zhao, et al.

Automatic story generation is a challenging task, which involves automatically composing a sequence of sentences or words with a consistent topic and novel wordings. Although much attention has been paid to this task and promising progress has been made, there still exists a noticeable gap between generated stories and those created by humans, especially in terms of thematic consistency and wording novelty. To fill this gap, we propose a cache-augmented conditional variational autoencoder for story generation, where the cache module improves thematic consistency while the conditional variational autoencoder part uses a continuous latent variable to generate stories with less common words. To combine the cache module and the autoencoder part, we further introduce an effective gate mechanism. Experimental results on ROCStories and WritingPrompts indicate that our proposed model can generate stories with both consistency and wording novelty, and it outperforms existing models under both automatic metrics and human evaluations.
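The gate mechanism that combines the cache module with the latent variable can be sketched as an element-wise learned gate; the dimensions and the specific mixture form are assumptions rather than the paper's implementation.

```python
# Minimal sketch: a learned gate that blends a "cache" summary of recently
# generated text with the CVAE latent variable before decoding.
import torch
import torch.nn as nn

class CacheGate(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Linear(dim * 2, dim)

    def forward(self, cache_vec, latent_z):
        g = torch.sigmoid(self.gate(torch.cat([cache_vec, latent_z], dim=-1)))
        return g * cache_vec + (1 - g) * latent_z     # element-wise gated mixture

gate = CacheGate()
mixed = gate(torch.randn(2, 128), torch.randn(2, 128))
print(mixed.shape)  # torch.Size([2, 128])
```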



2021, pp. 1-12
Author(s): Xiaojun Chen, Ling Ding, Yang Xiang

Knowledge graph reasoning, or completion, aims at inferring missing facts based on the existing ones in a knowledge graph. In this work, we focus on the problem of open-world knowledge graph reasoning, a task that reasons about entities which are absent from the KG at training time (unseen entities). Unfortunately, the performance of most existing reasoning methods on this problem turns out to be unsatisfactory. Recently, some works have used graph convolutional networks to obtain embeddings of unseen entities for prediction tasks. Graph convolutional networks gather information from the entity's neighborhood, but they neglect the unequal importance of neighboring nodes. To resolve this issue, we present an attention-based method named NAKGR, which leverages neighborhood information to generate entity and relation representations. The proposed model is an encoder-decoder architecture. Specifically, the encoder devises a graph attention mechanism to aggregate neighboring nodes' information with a weighted combination. The decoder employs an energy function to predict the plausibility of each triple. Benchmark experiments show that NAKGR achieves significant improvements on open-world reasoning tasks. In addition, our model also performs well on closed-world reasoning tasks.
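A minimal sketch of the encoder-decoder idea under assumptions: the encoder builds an unseen entity's representation as an attention-weighted mixture of its neighbours, and the decoder scores a triple with a TransE-style translational energy. The actual NAKGR energy function may differ.

```python
# Minimal sketch: attention-weighted neighbour aggregation (encoder) and a
# translational energy score (decoder) for triple plausibility.
import torch
import torch.nn.functional as F

def attention_encode(entity_vec, neighbor_vecs):
    """Weight neighbours by compatibility with the entity and mix them in."""
    alpha = F.softmax(neighbor_vecs @ entity_vec, dim=0)    # (n_neighbors,)
    return entity_vec + alpha @ neighbor_vecs

def triple_energy(head, rel, tail):
    """Lower energy = more plausible triple (translational scoring assumption)."""
    return torch.norm(head + rel - tail, p=1)

d = 32
head = attention_encode(torch.randn(d), torch.randn(5, d))   # unseen entity built from neighbours
print(triple_energy(head, torch.randn(d), torch.randn(d)))
```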



Author(s): Fanshuang Kong, Richong Zhang, Yongyi Mao, Ting Deng

Embedding-based models for knowledge base completion have demonstrated great success and attracted significant research interest. In this work, we observe that existing embedding models all decompose their loss functions into atomic loss functions, each defined on a triple, or a postulated edge, in the knowledge graph. Such an approach essentially implies that, conditioned on the embeddings of the triple, whether the triple is factual is independent of the structure of the knowledge graph. Although the embeddings of the entities and relation in the triple arguably contain certain structural information about the knowledge base, we believe that the global information contained in the embeddings of the triple can be insufficient, and such an assumption is overly optimistic for heterogeneous knowledge bases. Motivated by this understanding, we propose a new embedding model that discards the assumption that the embeddings of the entities and relation in a triple are a sufficient statistic for the triple's factual existence. More specifically, the proposed model assumes that whether a triple is factual depends not only on the embedding of the triple but also on the embeddings of the entities and relations in a larger graph neighbourhood. In this model, attention mechanisms are constructed to select the relevant information in the graph neighbourhood so that irrelevant signals in the neighbourhood are suppressed. Termed locality-expanded neural embedding with attention (LENA), this model is tested on four standard datasets and compared with several state-of-the-art models for knowledge base completion. Extensive experiments suggest that LENA outperforms the existing models on virtually every metric.
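The neighbourhood-attention scoring can be sketched as follows, with all names and sizes as assumptions: the triple's own embeddings are concatenated with an attention-pooled summary of a wider neighbourhood, and an MLP outputs the probability that the triple is factual.

```python
# Minimal sketch: score a triple using both its own embeddings and an
# attention-pooled neighbourhood context, suppressing irrelevant signals.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborhoodTripleScorer(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.query = nn.Linear(dim * 3, dim)
        self.mlp = nn.Sequential(nn.Linear(dim * 4, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, h, r, t, neighborhood):
        # neighborhood: (n, dim) embeddings of entities/relations around (h, r, t)
        triple = torch.cat([h, r, t], dim=-1)
        alpha = F.softmax(neighborhood @ self.query(triple), dim=0)   # relevance weights
        context = alpha @ neighborhood                                # weighted neighbourhood summary
        return torch.sigmoid(self.mlp(torch.cat([triple, context], dim=-1)))

d = 32
scorer = NeighborhoodTripleScorer(d)
p = scorer(torch.randn(d), torch.randn(d), torch.randn(d), torch.randn(7, d))
print(p)  # probability that the triple is factual
```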


