Incremental Knowledge Construction for Real-World Event Understanding

Author(s):  
Koji Kamei ◽  
Yutaka Yanagisawa ◽  
Takuya Maekawa ◽  
Yasue Kishino ◽  
Yasushi Sakurai ◽  
...  

The construction of real-world knowledge is required if we are to understand real-world events that occur in a networked sensor environment. Since it is difficult to select suitable ‘events’ for recognition in a sensor environment a priori, we propose an incremental model for constructing real-world knowledge. Labeling is the central plank of the proposed model because the model simultaneously improves both the ontology of real-world events and the implementation of a sensor system based on a manually labeled event corpus. A labeling tool is developed in accordance with the model and is evaluated in a practical labeling experiment.



Author(s):  
Bayu Distiawan Trisedya ◽  
Jianzhong Qi ◽  
Rui Zhang

The task of entity alignment between knowledge graphs aims to find entities in two knowledge graphs that represent the same real-world entity. Recently, embedding-based models have been proposed for this task. Such models are built on top of a knowledge graph embedding model that learns entity embeddings to capture the semantic similarity between entities in the same knowledge graph. We propose to learn embeddings that can capture the similarity between entities in different knowledge graphs. Our proposed model helps align entities from different knowledge graphs, and hence enables the integration of multiple knowledge graphs. Our model exploits the large numbers of attribute triples existing in the knowledge graphs and generates attribute character embeddings. The attribute character embedding shifts the entity embeddings from the two knowledge graphs into the same space by computing the similarity between entities based on their attributes. We use a transitivity rule to further enrich the number of attributes of an entity and thereby enhance the attribute character embedding. Experiments using real-world knowledge bases show that our proposed model achieves consistent improvements over the baseline models by over 50% in terms of hits@1 on the entity alignment task.
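As a rough illustration of the attribute-based idea only (the paper's actual model learns embeddings jointly; everything below, including the data and all identifiers, is invented for illustration), the sketch compares character n-gram profiles of entity attribute values across two knowledge graphs and aligns each entity to its nearest neighbour:

```python
# Toy sketch: align entities across two knowledge graphs by comparing
# character-level representations of their attribute values.
from collections import Counter
import math

def char_vec(text, n=2):
    """Bag of character n-grams for a single attribute string."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def entity_profile(attrs):
    """Merge all attribute values of an entity into one n-gram vector."""
    prof = Counter()
    for value in attrs:
        prof.update(char_vec(value))
    return prof

def align(kg1, kg2):
    """Greedy nearest-neighbour alignment on attribute similarity."""
    pairs = {}
    for e1, a1 in kg1.items():
        p1 = entity_profile(a1)
        pairs[e1] = max(kg2, key=lambda e2: cosine(p1, entity_profile(kg2[e2])))
    return pairs

# Invented mini-KGs: entity -> list of attribute values.
kg1 = {"dbp:Melbourne": ["Melbourne", "-37.81", "Victoria"],
       "dbp:Sydney": ["Sydney", "-33.87", "New South Wales"]}
kg2 = {"geo:2158177": ["Melbourne, Victoria", "-37.8136"],
       "geo:2147714": ["Sydney, NSW", "-33.8688"]}
print(align(kg1, kg2))
# → {'dbp:Melbourne': 'geo:2158177', 'dbp:Sydney': 'geo:2147714'}
```

Character-level (rather than word-level) profiles are what let near-identical literals such as "-37.81" and "-37.8136" still match, which mirrors the motivation for attribute character embeddings.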


2021 ◽  
pp. 1-21
Author(s):  
Sundas Shahzadi ◽  
Areen Rasool ◽  
Musavarah Sarwar ◽  
Muhammad Akram

Bipolarity plays a key role in different domains, such as technology, social networking, and the biological sciences, for illustrating real-world phenomena using bipolar fuzzy models. In this article, novel concepts of bipolar fuzzy competition hypergraphs are introduced and an application of the proposed model is discussed. The main contribution is to illustrate different methods for the construction of bipolar fuzzy competition hypergraphs and their variants. The authors study various new concepts, including bipolar fuzzy row hypergraphs, bipolar fuzzy column hypergraphs, bipolar fuzzy k-competition hypergraphs, bipolar fuzzy neighborhood hypergraphs, and strong hyperedges. Besides, they develop some relations between bipolar fuzzy k-competition hypergraphs and bipolar fuzzy neighborhood hypergraphs. Moreover, the authors design an algorithm to compute the strength of competition among companies in a business market. A comparative analysis of the proposed model with existing models, such as bipolar fuzzy competition graphs and fuzzy competition hypergraphs, is discussed.
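A minimal sketch of the competition idea in the bipolar fuzzy setting, not the paper's hypergraph construction: each arc of a digraph carries a positive membership in [0, 1] and a negative membership in [-1, 0], two vertices compete when they share an out-neighbour, and (under the common convention that positive memberships combine by min and negative ones by max) the competition arc is scored over the shared neighbours. The function names and data are mine, not from the article.

```python
# Illustrative bipolar fuzzy competition on a digraph.
# arcs: {(u, v): (mu_pos, mu_neg)} with mu_pos in [0,1], mu_neg in [-1,0].
def competition(arcs):
    out = {}
    for (u, v), m in arcs.items():
        out.setdefault(u, {})[v] = m
    comp = {}
    verts = list(out)
    for i, x in enumerate(verts):
        for y in verts[i + 1:]:
            shared = set(out[x]) & set(out[y])   # common out-neighbours
            if shared:
                # min over positive memberships, max over negative ones
                pos = min(min(out[x][w][0], out[y][w][0]) for w in shared)
                neg = max(max(out[x][w][1], out[y][w][1]) for w in shared)
                comp[(x, y)] = (pos, neg)
    return comp

# Two companies A and B competing for the same market C.
arcs = {("A", "C"): (0.8, -0.2), ("B", "C"): (0.6, -0.5)}
print(competition(arcs))   # → {('A', 'B'): (0.6, -0.2)}
```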


2017 ◽  
Vol 117 (9) ◽  
pp. 1866-1889 ◽  
Author(s):  
Vahid Shokri Kahi ◽  
Saeed Yousefi ◽  
Hadi Shabanpour ◽  
Reza Farzipoor Saen

Purpose – The purpose of this paper is to develop a novel network and dynamic data envelopment analysis (DEA) model for evaluating the sustainability of supply chains. In the proposed model, all links can be considered in the calculation of the efficiency score.
Design/methodology/approach – A dynamic DEA model is proposed to evaluate sustainable supply chains in which the networks have a series structure. The nature of free links is defined and subsequently applied in calculating the relative efficiency of supply chains. An additive network DEA model is developed to evaluate the sustainability of supply chains over several periods. A case study demonstrates the applicability of the proposed approach.
Findings – This paper assists managers in identifying inefficient supply chains and taking proper remedial actions for performance optimization. Besides, the overall efficiency scores of supply chains show less fluctuation. By utilizing the proposed model and determining dual-role factors, managers can plan their supply chains properly and more accurately.
Research limitations/implications – In the real world, managers face big data; an approach for dealing with big data therefore still needs to be developed.
Practical implications – The proposed model offers useful managerial implications along with means for managers to monitor and measure the efficiency of their production processes. It can be applied to real-world problems in which decision makers face multi-stage processes such as supply chains and production systems.
Originality/value – For the first time, the authors present an additive model of network-dynamic DEA. Also for the first time, the links are outlined so that the carry-overs of networks are connected across different periods rather than across different stages.
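The paper's network-dynamic additive model is far richer, but the core DEA notion of relative efficiency can be sketched in the single-input, single-output special case, where a decision-making unit's CCR efficiency reduces to its output/input ratio divided by the best observed ratio (the data below are invented):

```python
# Minimal DEA illustration (not the paper's network-dynamic model):
# with one input and one output, each DMU's efficiency is its
# output/input ratio normalised by the best ratio in the sample.
def dea_efficiency(inputs, outputs):
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Four hypothetical supply chains: cost (input) and delivered units (output).
inputs = [10.0, 20.0, 30.0, 40.0]
outputs = [5.0, 15.0, 18.0, 20.0]
print(dea_efficiency(inputs, outputs))
# chain 2 (ratio 15/20 = 0.75) defines the frontier and scores 1.0
```

The general multi-input, multi-output case requires solving a linear program per DMU, and network/dynamic extensions such as the paper's additionally link stages and periods through carry-over variables.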


2020 ◽  
Vol 30 (1) ◽  
Author(s):  
Maryam Nematizadeh ◽  
Alireza Amirteimoori ◽  
Sohrab Kordrostami ◽  
Mohsen Vaez-Ghasemi

In the real world, there are processes whose structures resemble a parallel-series mixed network. Network data envelopment analysis (NDEA) is one of the appropriate methods for assessing the performance of processes with such structures. In this paper, mixed processes with parallel and series components are considered, in which the first (parallel) component involves shared inputs and the second (series) component involves undesirable factors. By adopting the weak disposability assumption for undesirable factors, a DEA approach based on the network slack-based measure (NSBM) is introduced to evaluate the performance of processes with mixed structures. The proposed model is illustrated with a real case study. The model is then extended to discriminate among efficient units.


2012 ◽  
Vol 263-266 ◽  
pp. 857-860
Author(s):  
Kuang Jung Tseng

This work presents a group decision-making model and applies it to a university safety evaluation to demonstrate its effectiveness. Importantly, the proposed model can assist university decision makers in deciding whether to buy a digital recorder sensor system, making it highly applicable for academic and commercial purposes.


2021 ◽  
Vol 13 (2) ◽  
pp. 62-84
Author(s):  
Boudjemaa Boudaa ◽  
Djamila Figuir ◽  
Slimane Hammoudi ◽  
Sidi mohamed Benslimane

Collaborative and content-based recommender systems are widely employed in several activity domains, helping users find relevant products and services (i.e., items). However, with the increasing number of item features, users are becoming more demanding in their requirements, and these recommender systems are no longer able to serve this purpose efficiently. Built on knowledge bases about users and items, constraint-based recommender systems (CBRSs) meet such complex user requirements. Nevertheless, this kind of recommender system remains rare in research and underutilised in practice, essentially due to difficulties in knowledge acquisition and/or software engineering. This paper details a generic software architecture for CBRS development. Accordingly, a prototype mobile application called DATAtourist has been realized using the DATAtourisme ontology as a recent real-world knowledge source in tourism. The evaluation of DATAtourist under varied usage scenarios has demonstrated its usability and reliability in recommending personalized touristic points of interest.
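In its simplest form, a constraint-based recommender filters the item catalogue by hard user requirements instead of ranking by past behaviour. A toy sketch with invented point-of-interest data (not the DATAtourist implementation, whose knowledge base is the DATAtourisme ontology):

```python
# Constraint-based filtering: only items satisfying every hard
# requirement are recommended, regardless of popularity or history.
pois = [
    {"name": "Museum",  "price": 12, "wheelchair": True,  "category": "culture"},
    {"name": "Hike",    "price": 0,  "wheelchair": False, "category": "nature"},
    {"name": "Gallery", "price": 8,  "wheelchair": True,  "category": "culture"},
]
requirements = {"max_price": 10, "wheelchair": True}

def recommend(items, req):
    return [i["name"] for i in items
            if i["price"] <= req["max_price"]
            and (i["wheelchair"] or not req["wheelchair"])]

print(recommend(pois, requirements))   # → ['Gallery']
```

Real CBRSs go further: when the constraint set is unsatisfiable they must diagnose and relax conflicting requirements, which is one of the knowledge-engineering difficulties the abstract alludes to.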


2021 ◽  
Author(s):  
F. Salis ◽  
S. Bertuletti ◽  
K. Scott ◽  
M. Caruso ◽  
T. Bonci ◽  
...  

Author(s):  
Gary Smith

Humans have invaluable real-world knowledge because we have accumulated a lifetime of experiences that help us recognize, understand, and anticipate. Computers do not have real-world experiences to guide them, so they must rely on statistical patterns in their digital database, which may be helpful but is certainly fallible.

We use emotions as well as logic to construct concepts that help us understand what we see and hear. When we see a dog, we may visualize other dogs, think about the similarities and differences between dogs and cats, or expect the dog to chase after a cat we see nearby. We may remember a childhood pet or recall past encounters with dogs. Remembering that dogs are friendly and loyal, we might smile and want to pet the dog or throw a stick for the dog to fetch. Remembering once being scared by an aggressive dog, we might pull back to a safe distance.

A computer does none of this. For a computer, there is no meaningful difference between dog, tiger, and XyB3c, other than the fact that they use different symbols. A computer can count the number of times the word dog is used in a story and retrieve facts about dogs (such as how many legs they have), but computers do not understand words the way humans do, and will not respond to the word dog the way humans do.

The lack of real-world knowledge is often revealed in software that attempts to interpret words and images. Language translation software programs are designed to convert sentences written or spoken in one language into equivalent sentences in another language. In the 1950s, a Georgetown–IBM team demonstrated the machine translation of 60 sentences from Russian to English using a 250-word vocabulary and six grammatical rules. The lead scientist predicted that, with a larger vocabulary and more rules, translation programs would be perfected in three to five years. Little did he know! He had far too much faith in computers.
It has now been more than 60 years and, while translation software is impressive, it is far from perfect. The stumbling blocks are instructive. Humans translate passages by thinking about the content—what the author means—and then expressing that content in another language.
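The surface-level capability described above, counting occurrences of a word without any grasp of what it denotes, really is just a few lines of code:

```python
# A computer can tally the word "dog" in a story with no notion
# of what a dog is; the story text here is invented.
import re

story = "The dog barked. A cat ran, and the dog chased the cat up a tree."
dog_count = len(re.findall(r"\bdog\b", story.lower()))
print(dog_count)   # → 2
```

The word-boundary anchors (`\b`) keep the count from also matching substrings such as "dogma", but no amount of pattern matching supplies the associations and expectations a human brings to the word.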

