Applying Semantic Similarity Measures Based on Information Content in the Evaluation of a Domain Ontology

Author(s):  
Aimee Cecilia Hernandez Garcia ◽  
Mireya Tovar Vidal ◽  
Jose de Jesus Lavalle Martinez
2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Gaston K. Mazandu ◽  
Nicola J. Mulder

Several approaches have been proposed for computing term information content (IC) and semantic similarity scores within the Gene Ontology (GO) directed acyclic graph (DAG). These approaches have contributed to improving protein analyses at the functional level. Given the recent proliferation of such approaches, a unified theory in a well-defined mathematical framework is needed to provide a basis for validating them. We review the existing IC-based ontological similarity approaches developed in the biomedical and bioinformatics fields and propose a general framework and unified description of all these measures. We conducted an experimental evaluation to assess the impact of IC approaches, different normalization models, and correction factors on the performance of a functional similarity metric. Results reveal that considering only parents or only children of terms when computing information content or semantic similarity scores negatively impacts the approach under consideration. This study produces a unified framework for current and future GO semantic similarity measures and provides a theoretical basis for comparing different approaches. The experimental evaluation of approaches based on different term IC models paves the way towards a solution to the problem of scoring a term's specificity in the GO DAG.
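As a concrete illustration of the IC-based similarity family the abstract surveys, the sketch below computes Resnik's classic measure (the IC of the most informative common ancestor) over a toy GO-like DAG. The term names and annotation counts are invented for illustration; real implementations operate on the full GO graph and annotation corpora.

```python
import math

# Toy GO-like DAG: term -> set of parent terms (names are invented).
parents = {
    "root": set(),
    "binding": {"root"},
    "catalysis": {"root"},
    "protein_binding": {"binding"},
    "atp_binding": {"binding"},
}

# Hypothetical annotation counts; each term's count already includes
# annotations of its descendants, as is standard for GO-style IC.
counts = {"root": 100, "binding": 60, "catalysis": 40,
          "protein_binding": 30, "atp_binding": 20}

def ancestors(term):
    """Return the term together with all of its ancestors in the DAG."""
    seen, stack = {term}, [term]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def ic(term):
    """Information content: negative log of the annotation probability."""
    return -math.log(counts[term] / counts["root"])

def resnik_sim(t1, t2):
    """Resnik similarity: IC of the most informative common ancestor."""
    return max(ic(a) for a in ancestors(t1) & ancestors(t2))
```

For example, `resnik_sim("protein_binding", "atp_binding")` equals the IC of their shared parent `binding`, i.e. `-log(0.6) ≈ 0.51`, while terms whose only common ancestor is the root score 0.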


Author(s):  
David Sánchez ◽  
Montserrat Batet

The Information Content (IC) of a concept quantifies the amount of information it provides when it appears in a context. In the past, IC was computed as a function of concept appearance probabilities in corpora, but corpus dependency and data sparseness hampered the results. More recently, other authors have tried to overcome these limitations by estimating IC from the knowledge modeled in an ontology. In this paper, the authors develop this idea by proposing a new model that computes the IC of a concept by exploiting the taxonomic knowledge modeled in an ontology. Compared with related works, their proposal aims to better capture the semantic evidence found in the ontology. To test their approach, the authors applied it to well-known semantic similarity measures, which were evaluated using standard benchmarks. Results show that the use of their model produces, in most cases, more accurate similarity estimations than related works.
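A minimal sketch of an intrinsic (corpus-free) IC model in the spirit the abstract describes, using the leaves-to-subsumers ratio proposed by Sánchez et al.: concepts subsuming many leaves are general (low IC), while deep leaf concepts are specific (high IC). The toy taxonomy and concept names are invented; a real implementation would run over a full ontology such as WordNet.

```python
import math

# Toy taxonomy: concept -> set of parent concepts (invented names).
parents = {
    "entity": set(),
    "vehicle": {"entity"},
    "car": {"vehicle"},
    "bicycle": {"vehicle"},
    "animal": {"entity"},
    "dog": {"animal"},
}

# Invert the parent relation once, for leaf and subtree queries.
children = {c: set() for c in parents}
for child, ps in parents.items():
    for p in ps:
        children[p].add(child)

def subsumers(c):
    """The concept plus all of its ancestors."""
    seen, stack = {c}, [c]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def leaves(c):
    """Leaf concepts in the subtree rooted at c (c itself if a leaf)."""
    if not children[c]:
        return {c}
    out = set()
    for ch in children[c]:
        out |= leaves(ch)
    return out

MAX_LEAVES = sum(1 for c in parents if not children[c])

def intrinsic_ic(c):
    """IC from taxonomic structure alone: the fewer leaves a concept
    covers relative to its subsumers, the more specific it is."""
    ratio = len(leaves(c)) / len(subsumers(c))
    return -math.log((ratio + 1) / (MAX_LEAVES + 1))
```

Here the root `entity` gets IC 0, while the leaf `car` gets `-log((1/3 + 1)/4) = log 3 ≈ 1.10`, without consulting any corpus.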


2021 ◽  
Vol 177 ◽  
pp. 114922
Author(s):  
Mehdi Jabalameli ◽  
Mohammadali Nematbakhsh ◽  
Reza Ramezani

2021 ◽  
Vol 54 (2) ◽  
pp. 1-37
Author(s):  
Dhivya Chandrasekaran ◽  
Vijay Mago

Estimating the semantic similarity between text data is one of the challenging open research problems in the field of Natural Language Processing (NLP). The versatility of natural language makes it difficult to define rule-based methods for determining semantic similarity. To address this issue, various semantic similarity methods have been proposed over the years. This survey article traces the evolution of such methods, from traditional NLP techniques such as kernel-based methods to the most recent research on transformer-based models, categorizing them by their underlying principles into knowledge-based, corpus-based, deep neural network-based, and hybrid methods. Discussing the strengths and weaknesses of each method, this survey provides a comprehensive view of existing systems, enabling new researchers to experiment and develop innovative ideas to address the problem of semantic similarity.
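For orientation, one of the simplest corpus-style baselines covered by such taxonomies is cosine similarity over bag-of-words vectors; modern embedding and transformer methods replace the sparse counts with dense vectors, but the final scoring step is the same. A minimal sketch:

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of bag-of-words count vectors: 1.0 means
    identical word distributions, 0.0 means no words in common."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(n * b[w] for w, n in a.items())
    norm_a = math.sqrt(sum(n * n for n in a.values()))
    norm_b = math.sqrt(sum(n * n for n in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Note that this captures lexical overlap, not meaning: "car" and "automobile" score 0 here, which is exactly the gap knowledge-based and embedding-based methods address.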


2018 ◽  
Vol 14 (2) ◽  
pp. 16-36 ◽  
Author(s):  
Carlos Ramón Rangel ◽  
Junior Altamiranda ◽  
Mariela Cerrada ◽  
Jose Aguilar

Procedures for merging two ontologies are mostly concerned with the enrichment of one of the input ontologies, i.e. the knowledge of the aligned concepts from one ontology is copied into the other. As a consequence, the resulting ontology extends the original knowledge of the base ontology, but the unaligned concepts of the other ontology are not included in the new extended ontology. On the other hand, there are expert-aided semi-automatic approaches for including the knowledge left out of the merged ontology and for debugging possible concept redundancy. To address the need to include all the knowledge of the ontologies being merged without redundancy, this article proposes an automatic approach for merging ontologies based on semantic similarity measures and an exhaustive search among the closest concepts. The authors' approach was compared to other merging algorithms, and good results were obtained in terms of completeness, relationships, and properties, without creating redundancy.
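The overall strategy described above can be sketched as follows. This is only an illustration, not the authors' algorithm: `sim` stands in for any semantic similarity measure, and the threshold and the concept-to-parents dictionary encoding are assumptions of this sketch.

```python
def merge_ontologies(ont_a, ont_b, sim, threshold=0.8):
    """Merge two ontologies given as concept -> set-of-parents dicts.
    Concepts of ont_b whose best match in ont_a scores above the
    threshold are aligned onto the existing concept; the rest are
    copied in, so all knowledge is kept and no duplicates are created."""
    merged = {c: set(ps) for c, ps in ont_a.items()}
    # Map every concept of ont_b either to its closest ont_a concept
    # (if similar enough) or to itself (if it must be copied over).
    mapping = {}
    for c in ont_b:
        best = max(ont_a, key=lambda a: sim(a, c))
        mapping[c] = best if sim(best, c) >= threshold else c
    # Copy ont_b's relations, rewriting endpoints through the mapping.
    for c, ps in ont_b.items():
        merged.setdefault(mapping[c], set()).update(mapping.get(p, p) for p in ps)
    return merged
```

With an exact-name `sim`, merging `{"animal": set(), "dog": {"animal"}}` with `{"animal": set(), "cat": {"animal"}}` yields three concepts, with `cat` attached under the existing `animal` rather than a duplicated one.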


2012 ◽  
Vol 263-266 ◽  
pp. 1588-1592
Author(s):  
Jiu Qing Li ◽  
Chi Zhang ◽  
Peng Zhou Zhang

To address inefficient resource tagging and low-precision retrieval in specialized fields, an analysis method for tag semantic relevancy based on a controlled database is proposed. The characteristics of specialized fields and a method for building the controlled database are discussed. A domain-ontology correlation calculation method is used to obtain semantic correlation. A tag semantic similarity calculation method is developed for semantic similarity, with normalization applied to increase the similarity accuracy. Taking semantic correlation and similarity as parameters, the semantic relevancy in a specialized field can be obtained. The method has been applied successfully in actual projects, improving resource tagging and retrieval efficiency.
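The abstract does not give the exact combination formula, so the sketch below is only a plausible reading: min-max normalization of a raw similarity score, followed by a weighted mean of the normalized correlation and similarity; the weight `alpha` is a hypothetical parameter, not one from the paper.

```python
def min_max_normalize(x, lo, hi):
    """Map a raw score from the range [lo, hi] onto [0, 1]."""
    return (x - lo) / (hi - lo) if hi > lo else 0.0

def relevancy(correlation, similarity, alpha=0.5):
    """Weighted mean of normalized semantic correlation and semantic
    similarity; alpha balances the two contributions."""
    return alpha * correlation + (1 - alpha) * similarity
```

With equal weights, a correlation of 0.6 and a similarity of 0.8 combine to a relevancy of 0.7.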

