An Ontology and Knowledge Graph Infrastructure for Digital Library Knowledge Representation

Author(s):  
Stefano Ferilli ◽  
Domenico Redavid
2021 ◽  
Vol 41 (2) ◽  
pp. 3603-3613
Author(s):  
Jin Dong ◽  
Jian Wang ◽  
Sen Chen

The manufacturing industry is the foundation of a country’s economic development and prosperity. At present, data in manufacturing enterprises suffer from weak correlation and high redundancy, problems that knowledge graphs can solve effectively. In this paper, a method for knowledge graph construction in the manufacturing domain based on a knowledge-enhanced word embedding model is proposed. The main contributions are as follows: (1) At the algorithmic level, this paper proposes KEWE-BERT, an end-to-end model for joint entity and relation extraction, which superimposes the token embeddings and knowledge embeddings output by BERT and TransR so as to improve knowledge extraction; (2) At the application level, the knowledge representation model ManuOnto and the dataset ManuDT are constructed based on real manufacturing scenarios, and KEWE-BERT is used to construct a knowledge graph from them. The resulting knowledge graph has rich semantic relations and can be applied in an actual production environment. In addition, KEWE-BERT can extract effective knowledge and patterns from redundant texts in the enterprise, providing a solution for enterprise data management.
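The superposition step at the core of KEWE-BERT can be pictured as an element-wise sum of a token embedding (from BERT) and the aligned knowledge embedding (from TransR). A minimal sketch, assuming matched dimensions; the function name `superimpose` and the example vectors are illustrative, not from the paper:

```python
def superimpose(token_emb, knowledge_emb):
    """Element-wise sum of a token embedding and its aligned
    knowledge embedding (a toy stand-in for KEWE-BERT's
    superposition of BERT and TransR outputs)."""
    if len(token_emb) != len(knowledge_emb):
        raise ValueError("embedding dimensions must match")
    return [t + k for t, k in zip(token_emb, knowledge_emb)]

# Fuse a 3-dimensional token embedding with a knowledge embedding.
fused = superimpose([0.5, -0.25, 1.0], [0.25, 0.5, -0.5])
```

In the real model the fused vectors would then feed the joint entity and relation extraction head; here the sum alone illustrates why the two embedding spaces must share a dimensionality.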


Author(s):  
Ruobing Xie ◽  
Zhiyuan Liu ◽  
Huanbo Luan ◽  
Maosong Sun

Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
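The attention-based aggregation IKRL applies over an entity's image representations can be sketched as softmax-weighted pooling against a query vector. This is a simplified illustration of the mechanism, not the paper's exact formulation; the query-based scoring is an assumption:

```python
import math

def attention_aggregate(image_embs, query):
    """Aggregate several image embeddings of one entity into a single
    representation via softmax attention against a query vector."""
    # Dot-product relevance score for each image embedding.
    scores = [sum(q * x for q, x in zip(query, emb)) for emb in image_embs]
    # Numerically stable softmax over the scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Weighted sum of the image embeddings.
    dim = len(image_embs[0])
    return [sum(w * emb[i] for w, emb in zip(weights, image_embs))
            for i in range(dim)]
```

With identical inputs the weights are uniform and the aggregate equals each input, which is a quick sanity check on the pooling.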


Author(s):  
Jinjiao Lin ◽  
Yanze Zhao ◽  
Weiyuan Huang ◽  
Chunfang Liu ◽  
Haitao Pu

Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yang He ◽  
Ling Tian ◽  
Lizong Zhang ◽  
Xi Zeng

Autonomous object detection powered by cutting-edge artificial intelligence techniques has become an essential component for sustaining complex smart city systems. Fine-grained image classification focuses on recognizing the subcategories of images at a specific level. Because images from different subcategories of the same category are highly similar, while images within the same subcategory can vary widely, it has always been a challenging problem in computer vision. Traditional approaches usually rely on exploring only the visual information in images. Therefore, this paper proposes a novel Knowledge Graph Representation Fusion (KGRF) framework to introduce prior knowledge into the fine-grained image classification task. Specifically, a Graph Attention Network (GAT) is employed to learn the knowledge representation from a constructed knowledge graph modeling the category-subcategory and subcategory-attribute associations. By introducing the Multimodal Compact Bilinear (MCB) module, the framework can fully integrate the knowledge representation and visual features to learn high-level image features. Extensive experiments on the Caltech-UCSD Birds-200-2011 dataset verify the superiority of the proposed framework over several existing state-of-the-art methods.
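MCB fuses two modalities by count-sketch-projecting each feature vector and then combining the two sketches by circular convolution (computed via FFT in the original formulation). The sketch below illustrates only the count-sketch projection step; the function signature and the seed handling are assumptions for the example, not the paper's implementation:

```python
import random

def count_sketch(vec, d_out, seed=0):
    """Project a vector into d_out dimensions with a count sketch:
    each input dimension i gets a random output index h[i] and a
    random sign s[i], and contributes s[i] * vec[i] at that index."""
    rng = random.Random(seed)
    h = [rng.randrange(d_out) for _ in vec]
    s = [rng.choice([-1.0, 1.0]) for _ in vec]
    out = [0.0] * d_out
    for i, v in enumerate(vec):
        out[h[i]] += s[i] * v
    return out
```

Fixing the seed keeps the projection deterministic across calls, which matters in MCB because both modalities must be sketched with their own fixed hash functions before the convolution step.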


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Dong Zhong ◽  
Yi-An Zhu ◽  
Lanqing Wang ◽  
Junhua Duan ◽  
Jiaxuan He

Information in the working environment of the industrial Internet is characterized by diversity, semantics, hierarchy, and relevance. However, existing methods for representing environmental information mostly emphasize the concepts and relationships in the environment and understand items and relationships at the instance level insufficiently. They also suffer from low visualization of knowledge representation, poor human-machine interaction, insufficient knowledge reasoning ability, and slow knowledge search, and so cannot meet the needs of intelligent and personalized services. Accordingly, this paper designs a cognitive information representation model based on a knowledge graph, which combines the perceptual information of an industrial robot ontology with semantic description information, such as functional attributes obtained from the Internet, to form a structured, logically reasoned cognitive knowledge graph comprising a perception layer and a cognition layer. The data sources for constructing this cognitive knowledge graph are wide-ranging and heterogeneous, with entity-level semantic differences and knowledge-system differences among sources. To address this, a multimodal entity semantic fusion model based on vector features and a system fusion framework based on HowNet are designed, and the environment description information acquired by industrial robots (object semantics, attributes, relations, spatial location, and context) together with their own state information is unified and standardized. This realizes the automatic representation of robot-perceived information and enhances the universality, systematicness, and intuitiveness of robot cognitive information representation, so that the cognitive reasoning ability and knowledge retrieval efficiency of robots in the industrial Internet environment can be effectively improved.
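A common vector-feature approach to fusing entities across heterogeneous sources is to treat two entities as the same when their feature vectors are sufficiently similar. The abstract does not give the paper's exact fusion model, so the cosine-similarity matcher below is only an illustrative assumption; the names `cosine` and `align` and the threshold are invented for the example:

```python
import math

def cosine(a, b):
    """Cosine similarity between two entity feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def align(entity_vec, candidates, threshold=0.9):
    """Return names of candidate entities whose vectors are similar
    enough to be merged with the given entity across sources."""
    return [name for name, vec in candidates.items()
            if cosine(entity_vec, vec) >= threshold]
```

In practice the threshold would be tuned per domain, and a lexical resource such as HowNet could resolve the remaining knowledge-system differences that pure vector similarity misses.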


Author(s):  
Xinhua Suo ◽  
Bing Guo ◽  
Yan Shen ◽  
Wei Wang ◽  
Yaosen Chen ◽  
...  

Knowledge representation learning (knowledge graph embedding) plays a critical role in the application of knowledge graph construction. Multi-source knowledge representation learning, one of the most promising classes of knowledge representation learning at present, mainly focuses on learning useful additional information about entities and relations into their embeddings, such as text descriptions, entity types, visual information, and graph structure. However, one simple but very common kind of information has been ignored: the number of an entity's relations, which reflects the number of its semantic types. This work proposes a multi-source knowledge representation learning model, KRL-NER, which embeds information about the number of each entity's relations into the entity embeddings through an attention mechanism. Specifically, we first design and construct LearnNER, a submodel of KRL-NER that learns an embedding encoding the number of an entity's relations; then we obtain a new embedding by applying attention between this embedding and an embedding learned by a model such as TransE; finally, we perform translation based on the new embedding. Experiments on standard knowledge graph tasks, namely entity prediction, entity prediction under different relation types, and triple classification, are carried out to verify the model. The results show that the model is effective on large-scale knowledge graphs, e.g. FB15K.
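The translation step referenced above follows TransE's modeling assumption that h + r ≈ t for a true triple (h, r, t), scored by the distance between h + r and t. A minimal sketch of the L1 variant of that score (KRL-NER's attention re-weighting is omitted, and the example vectors are invented):

```python
def transe_score(h, r, t):
    """TransE plausibility score ||h + r - t||_1 for a triple
    (head, relation, tail); lower means more plausible."""
    return sum(abs(hi + ri - ti) for hi, ri, ti in zip(h, r, t))

# A perfectly consistent triple scores 0.0.
perfect = transe_score([2.0, 0.0], [1.0, 1.0], [3.0, 1.0])
```

Triple classification then reduces to thresholding this score, and entity prediction to ranking candidate tails by it.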


2021 ◽  
Vol 3 (4) ◽  
pp. 802-818
Author(s):  
M.V.P.T. Lakshika ◽  
H.A. Caldera

E-newspaper readers are overloaded with the massive text of e-news articles, which often misleads readers trying to extract and understand information. Thus, there is an urgent need for a technology that can automatically represent the gist of these e-news articles more quickly. Currently, popular machine learning approaches have greatly improved presentation accuracy compared to traditional methods, but they cannot incorporate contextual information to acquire higher-level abstractions. Recent research efforts in knowledge representation using graph approaches are neither user-driven nor flexible to deviations in the data. Thus, attention has turned to constructing knowledge graphs by combining the background information related to the subjects in text documents. We propose an enhanced representation of a scalable knowledge graph by automatically extracting the information from a corpus of e-news articles, and we determine whether a knowledge graph can serve as an efficient application for analyzing and generating knowledge representation from the extracted e-news corpus. This knowledge graph consists of a knowledge base built from triples that automatically produce knowledge representation from e-news articles. It has been observed that the proposed knowledge graph generates a comprehensive and precise knowledge representation for the corpus of e-news articles.
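The knowledge base underlying such a graph is a set of subject-predicate-object triples extracted from the articles. A toy store along those lines, where the class and method names are illustrative assumptions rather than the paper's implementation:

```python
class TripleStore:
    """Minimal knowledge base of (subject, predicate, object) triples,
    the kind of structure the e-news knowledge graph is built from."""

    def __init__(self):
        self.triples = set()

    def add(self, subject, predicate, obj):
        """Record one extracted fact; duplicates are ignored."""
        self.triples.add((subject, predicate, obj))

    def objects(self, subject, predicate):
        """Query all objects linked to a subject via a predicate."""
        return {o for s, p, o in self.triples
                if s == subject and p == predicate}
```

Adding the triple ("Parliament", "passed", "Budget Bill") extracted from an article then lets a reader-facing summary query every action attributed to "Parliament" across the corpus.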


2021 ◽  
Vol 15 ◽  
Author(s):  
Francisco Martín ◽  
Jonatan Ginés ◽  
Francisco J. Rodríguez-Lera ◽  
Angel M. Guerrero-Higueras ◽  
Vicente Matellán Olivera

This paper proposes a novel system for managing visual attention in social robots. The system is based on a client/server approach that allows integration with a cognitive architecture controlling the robot. The core of this architecture is a distributed knowledge graph, in which perceptual needs are expressed by the presence of arcs to stimuli that need to be perceived. The attention server sends motion commands to the robot's actuators, while the attention clients send requests through the common knowledge representation, which is shared by all levels of the architecture. The system has been implemented on ROS and tested on a social robot to verify the validity of the approach, and it was used to solve the tests proposed in the RoboCup@Home and SciROc robotics competitions. These tests have been used to quantitatively compare the proposal to traditional visual attention mechanisms.
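The contract described above, where a client expresses a perceptual need by adding an arc to a stimulus node in the shared graph, can be mocked up in a few lines. All names here (`AttentionGraph`, `request`, the `"wants_to_see"` arc label) are invented for illustration; the real system works over a distributed knowledge graph on ROS:

```python
class AttentionGraph:
    """Toy shared knowledge graph: attention clients add arcs to
    stimulus nodes, and the attention server reads the pending
    stimuli to decide where to point the robot's sensors."""

    def __init__(self):
        self.arcs = set()

    def request(self, client, stimulus):
        """Client side: express a perceptual need as an arc."""
        self.arcs.add((client, stimulus, "wants_to_see"))

    def pending_stimuli(self):
        """Server side: collect every stimulus some client needs."""
        return {s for _, s, rel in self.arcs if rel == "wants_to_see"}
```

The point of the design is that clients and server never call each other directly; the graph itself is the communication channel, so any layer of the cognitive architecture can post or read perceptual needs.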

