ontology extraction
Recently Published Documents

TOTAL DOCUMENTS: 73 (FIVE YEARS: 14)
H-INDEX: 10 (FIVE YEARS: 1)

2021, Vol. 2021, pp. 1-12
Author(s): Xingsi Xue, Xiaojing Wu, Jie Zhang, Lingyu Zhang, Hai Zhu, ...

To enhance communication and information security across next-generation Industrial Internet of Things (Nx-IIoT) sensor networks, it is critical to aggregate heterogeneous sensor data by establishing semantic connections between diverse sensor ontologies. Sensor ontology matching determines corresponding concept pairs in two distinct sensor ontologies and is an effective way of addressing this heterogeneity problem. However, existing matching techniques neglect the relationships among different entity mappings and therefore cannot ensure high-quality alignments. To overcome this shortcoming, this work proposes a sensor ontology extraction technique based on a Fuzzy Debate Mechanism (FDM) for aggregating heterogeneous sensor data, which determines the final sensor concept correspondences by carrying out a debating process among different matchers. In particular, a fuzzy similarity metric is presented that measures the similarity of two entities through a membership function: it first uses the fuzzy membership function to model the two entities in vector space and then calculates their semantic distance with the cosine function. Testing cases from the Bibliographic data set furnished by the Ontology Alignment Evaluation Initiative (OAEI) and six sensor ontology matching tasks are used to evaluate the performance of the scheme, and comparisons with state-of-the-art ontology matching techniques demonstrate its robustness and effectiveness.
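
The abstract describes the fuzzy similarity metric only at a high level, so the Python snippet below is a minimal sketch rather than the authors' implementation: it assumes entity labels are represented as character-trigram count vectors, maps the raw counts through a simple saturating membership function (an assumption, since the paper does not specify the membership function), and compares the resulting fuzzy vectors with the cosine function.

```python
import numpy as np
from collections import Counter

def trigram_counts(label):
    """Character trigrams of an entity label (an illustrative feature choice)."""
    label = label.lower().replace("_", " ")
    return Counter(label[i:i + 3] for i in range(len(label) - 2))

def fuzzy_membership(count, cap=3.0):
    """Hypothetical saturating membership function mapping a raw count into [0, 1]."""
    return min(count, cap) / cap

def fuzzy_cosine_similarity(label_a, label_b):
    """Model both entities in a fuzzy vector space, then score them with the cosine function."""
    counts_a, counts_b = trigram_counts(label_a), trigram_counts(label_b)
    vocab = sorted(set(counts_a) | set(counts_b))
    vec_a = np.array([fuzzy_membership(counts_a.get(g, 0)) for g in vocab])
    vec_b = np.array([fuzzy_membership(counts_b.get(g, 0)) for g in vocab])
    denom = np.linalg.norm(vec_a) * np.linalg.norm(vec_b)
    return float(vec_a @ vec_b / denom) if denom else 0.0

print(fuzzy_cosine_similarity("TemperatureSensor", "Temperature_Sensor_Node"))
```

In a full matcher, scores like this from several matchers would then be reconciled by the debating process to select the final correspondences.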


Complexity, 2021, Vol. 2021, pp. 1-26
Author(s): Min-Hua Chao, Amy J. C. Trappey, Chun-Ting Wu

Natural language processing (NLP) is a critical part of the digital transformation, enabling user-friendly interaction between machines and humans by making computers understand human language. Intelligent chatbots are an essential NLP application: they understand users' utterances and respond in understandable sentences, simulating human-to-human conversation for problem solving and Q&A in specific applications. This research studies emerging technologies for NLP-enabled intelligent chatbot development using a systematic patent-analytic approach. Several intelligent text-mining techniques are applied, including document term frequency analysis for key terminology extraction, clustering for identifying subdomains, and Latent Dirichlet Allocation for finding the key topics of the patent set. The research uses the Derwent Innovation database as the main source for retrieving global intelligent-chatbot patents.
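
The abstract names the text-mining steps but gives no implementation details; the sketch below shows one common way to chain them in Python with scikit-learn, using a few placeholder abstracts (a real run would retrieve patents from the Derwent Innovation database).

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder patent abstracts, for illustration only.
patents = [
    "chatbot dialogue management using intent classification",
    "voice assistant speech recognition and response generation",
    "customer service chatbot with knowledge base retrieval",
    "neural network model for natural language understanding",
]

# Document term frequency analysis for key terminology extraction.
tfidf = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
X_tfidf = tfidf.fit_transform(patents)

# Clustering to identify technology subdomains.
subdomains = KMeans(n_clusters=2, random_state=0).fit_predict(X_tfidf)

# Latent Dirichlet Allocation over raw term counts to surface key topics.
counts = CountVectorizer(stop_words="english")
X_counts = counts.fit_transform(patents)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X_counts)

terms = counts.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    print(f"Topic {k}:", ", ".join(terms[i] for i in topic.argsort()[-5:][::-1]))
```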


Author(s): Man Tianxing, Nataly Zhukova, Alexander Vodyaho, Tin Tun Aung

Many domains require extracting knowledge, through data mining, from data streams received from observed objects, yet there is little guidance on which techniques can or should be used in which contexts. Meta-mining technology can help build data processing workflows from knowledge models that take into account the specific features of the observed objects. This paper proposes a meta-mining ontology framework for selecting algorithms that solve specific data mining tasks and for building suitable processes. The proposed ontology is constructed from existing ontologies and extended with an ontology of data characteristics and task requirements. Unlike existing ontologies, it describes the overall data mining process, can be used to build data processing workflows in various domains, and has low computational complexity compared with alternatives. The authors developed an ontology merging method and a sub-ontology extraction method, implemented on top of the OWL API by extracting and integrating the relevant axioms.
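
The authors implement merging and sub-ontology extraction with the Java OWL API at the axiom level; as a rough illustration of the two operations only, the sketch below uses Python and rdflib instead, merging ontology files at the RDF triple level and extracting the sub-ontology rooted at a seed concept. The file names and the seed IRI are hypothetical.

```python
from rdflib import Graph, RDFS, URIRef

def merge_ontologies(paths):
    """Union of all triples from several ontology files (RDF-level stand-in for axiom merging)."""
    merged = Graph()
    for path in paths:
        merged += Graph().parse(path)
    return merged

def extract_sub_ontology(graph, seed):
    """Keep the seed class, its transitive subclasses, and all triples about them."""
    keep, frontier = {seed}, {seed}
    while frontier:
        frontier = {s for cls in frontier
                    for s in graph.subjects(RDFS.subClassOf, cls)} - keep
        keep |= frontier
    sub = Graph()
    for s, p, o in graph:
        if s in keep:
            sub.add((s, p, o))
    return sub

# Hypothetical ontology files and seed concept, assumed to exist for this example.
merged = merge_ontologies(["data_mining_core.ttl", "data_characteristics.ttl"])
sub = extract_sub_ontology(merged, URIRef("http://example.org/onto#ClusteringTask"))
sub.serialize("sub_ontology.ttl")
```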


2021
Author(s): Yan Hu, Shujian Sun, Thomas Rowlands, Tim Beck, Joram Matthias Posma

Motivation: The availability of improved natural language processing (NLP) algorithms and models enables researchers to analyze larger corpora using open-source tools. Text mining of the biomedical literature is one area in which NLP has been applied in recent years and which still holds large untapped potential. However, corpora must be standardized before they can be analyzed with machine learning NLP algorithms, and summarizing data from the literature for storage in databases typically requires manual curation, especially when extracting data from result tables. Results: We present an automated pipeline that cleans HTML files from the biomedical literature. The output is a single JSON file that contains the text of each section, table data in machine-readable format, and lists of the phenotypes and abbreviations found in the article. We analyzed a total of 2,441 Open Access articles from PubMed Central, covering both Genome-Wide and Metabolome-Wide Association Studies, and developed a model to standardize the section headers based on the Information Artifact Ontology. Extraction of table data was developed on PubMed articles and fine-tuned using the equivalent publisher versions. Availability: The Auto-CORPus package is freely available, with detailed instructions, from GitHub at https://github.com/jmp111/AutoCORPus/.
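
Auto-CORPus itself should be obtained from the GitHub repository above; the toy sketch below only illustrates the general idea of reducing article HTML to a JSON structure of section text and table data with BeautifulSoup, and omits the section-header standardization and the phenotype and abbreviation extraction that the real pipeline provides.

```python
import json
from bs4 import BeautifulSoup

def article_html_to_dict(html):
    """Toy reduction of an HTML article body to section text plus table data."""
    soup = BeautifulSoup(html, "html.parser")

    # Pair each heading with the paragraph text that follows it, up to the next heading.
    sections = []
    for heading in soup.find_all(["h1", "h2", "h3"]):
        paragraphs = []
        for sibling in heading.find_next_siblings():
            if sibling.name in ("h1", "h2", "h3"):
                break
            if sibling.name == "p":
                paragraphs.append(sibling.get_text(" ", strip=True))
        sections.append({"heading": heading.get_text(strip=True),
                         "text": " ".join(paragraphs)})

    # Flatten every <table> into rows of cell strings (machine-readable table data).
    tables = [[[cell.get_text(strip=True) for cell in row.find_all(["th", "td"])]
               for row in table.find_all("tr")]
              for table in soup.find_all("table")]

    return {"sections": sections, "tables": tables}

demo = ("<h2>Results</h2><p>SNP rs123 was associated with BMI.</p>"
        "<table><tr><th>SNP</th><th>P</th></tr><tr><td>rs123</td><td>3e-9</td></tr></table>")
print(json.dumps(article_html_to_dict(demo), indent=2))
```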


2021
Author(s): Bernabé Batchakui, Emile Tawamba, Roger Nkambou

Author(s): Pakkir Mohideen S.

This chapter illustrates novel methods for providing personalized and adaptive content to learners. It presents a methodology for automatically constructing concept maps from an ontology to measure a learner's understanding of a particular topic, so that teachers can adopt adaptive teaching based on the knowledge structures reflected in the concept maps and can dynamically revise and deliver instructional materials according to each learner's current progress. In this approach, the authors provide dynamic content to learners based on a neuro-fuzzy domain ontology extraction algorithm. The method also provides a personalized ontology model that learns ontological user profiles from both a world knowledge base and the user's local instance repositories. The main contribution of the work is mining the learners' personalized ontologies to extract their knowledge through ontology mining with the Inc Span+ algorithm.
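
The chapter abstract gives no code, so the sketch below is only a toy illustration of the concept-map idea: a domain concept map is built from hand-written ontology relations and a learner's map is scored against it for coverage. The triples, the graph representation, and the scoring rule are invented for illustration and do not reproduce the neuro-fuzzy or Inc Span+ methods described above.

```python
import networkx as nx

# Hypothetical domain-ontology relations: (concept, relation, concept).
DOMAIN_TRIPLES = [
    ("Programming", "includes", "Variables"),
    ("Programming", "includes", "Loops"),
    ("Loops", "requires", "Variables"),
    ("Loops", "includes", "ForLoop"),
]

def build_concept_map(triples):
    """Concept map as a directed, relation-labelled graph derived from ontology relations."""
    graph = nx.DiGraph()
    for subj, rel, obj in triples:
        graph.add_edge(subj, obj, relation=rel)
    return graph

def understanding_score(domain_map, learner_edges):
    """Fraction of domain relations the learner reproduced (toy measure of understanding)."""
    covered = sum(1 for edge in learner_edges if domain_map.has_edge(*edge))
    return covered / domain_map.number_of_edges()

domain_map = build_concept_map(DOMAIN_TRIPLES)
learner_map = [("Programming", "Variables"), ("Loops", "Variables")]
print(f"Coverage: {understanding_score(domain_map, learner_map):.0%}")
```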


Author(s): Angelo A. Salatino, Francesco Osborne, Enrico Motta

Ontologies of research areas have proven to be useful resources for analysing and making sense of scholarly data. In this chapter, we present the Computer Science Ontology (CSO), the largest ontology of research areas in the field, and discuss a number of applications that build on CSO to support high-level tasks such as topic classification, metadata extraction, and book recommendation.
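
As a rough illustration of the simplest task mentioned here, topic classification, the sketch below matches a tiny hypothetical excerpt of CSO topic labels against an abstract. It is a naive syntactic baseline, not the official CSO Classifier; the full ontology is available from the CSO portal.

```python
import re

# Tiny hypothetical excerpt of CSO topic labels, for illustration only.
CSO_LABELS = {
    "ontology learning",
    "natural language processing",
    "knowledge graph",
    "semantic web",
    "machine learning",
}

def classify(text, labels=CSO_LABELS):
    """Naive syntactic topic classification: return every CSO label mentioned in the text."""
    text = text.lower()
    return sorted(label for label in labels
                  if re.search(r"\b" + re.escape(label) + r"\b", text))

abstract = ("We combine ontology learning and natural language processing "
            "to build a knowledge graph of research topics.")
print(classify(abstract))  # ['knowledge graph', 'natural language processing', 'ontology learning']
```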


Author(s): Alia El Bolock, Rania Nagy, Cornelia Herbert, Slim Abdennadher
