A New Model to Compute Semantic Similarity from Multi-ontology

Author(s):  
Lan Wang ◽  
Ming Chen
2021 ◽  
Author(s):  
Abdul Wahab ◽  
Rafet Sifa

In this paper, we propose a new model named DIBERT, which stands for Dependency Injected Bidirectional Encoder Representations from Transformers. DIBERT is a variation of BERT with an additional third objective, Parent Prediction (PP), alongside Masked Language Modeling (MLM) and Next Sentence Prediction (NSP). PP injects the syntactic structure of a dependency tree while pre-training DIBERT, which yields syntax-aware generic representations. We use the WikiText-103 benchmark dataset to pre-train both BERT-Base and DIBERT. After fine-tuning, we observe that DIBERT performs better than BERT-Base on various downstream tasks including Semantic Similarity, Natural Language Inference and Sentiment Analysis.
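The abstract does not spell out how Parent Prediction is implemented; a minimal sketch of one plausible formulation is shown below, where each token scores every position as a candidate dependency head and the loss is cross-entropy against the gold head indices from a parse. The bilinear scoring and the function names are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

# Hypothetical Parent Prediction (PP) objective: for each token, score
# every position as its candidate dependency head (parent), then take
# cross-entropy against the gold parent indices from a dependency parse.
def parent_scores(hidden, w_q, w_k):
    """score[i, j] = how strongly position j is preferred as the head of token i."""
    q = hidden @ w_q  # token-being-attached projection
    k = hidden @ w_k  # candidate-parent projection
    return q @ k.T    # (seq_len, seq_len)

def pp_loss(scores, gold_parents):
    """Cross-entropy of the gold head indices under a softmax over positions."""
    # numerically stable log-softmax along the candidate-parent axis
    z = scores - scores.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(gold_parents)), gold_parents].mean()

rng = np.random.default_rng(0)
seq_len, hidden_size = 6, 16
hidden = rng.normal(size=(seq_len, hidden_size))   # stand-in for BERT states
w_q = rng.normal(size=(hidden_size, hidden_size))
w_k = rng.normal(size=(hidden_size, hidden_size))
scores = parent_scores(hidden, w_q, w_k)
loss = pp_loss(scores, np.array([1, 0, 1, 2, 2, 4]))  # gold heads from a parse
```

During pre-training such a PP term would simply be added to the MLM and NSP losses; the relative weighting is not given in the abstract.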



2012 ◽  
Vol 605-607 ◽  
pp. 2351-2357 ◽  
Author(s):  
Shi Yang Deng ◽  
Yu Yue Du

A new model based on logic Petri nets is proposed for web service clusters. The service parameters of a cluster are unified, and logical expressions are introduced to represent the indeterminacy of service clusters. A service cluster can be used as an integrated unit, and parameter matching can be decided based on semantic similarity and logical inference within a service cluster. Algorithms for web service discovery and composition are given based on logic Petri nets, and redundant services are removed from service compositions by a backward method. The availability and efficiency of the proposed approach are illustrated by experiments on repositories of different sizes.
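The parameter-matching step described above can be illustrated with a small sketch: an upstream service's outputs cover a downstream service's inputs when each input finds a sufficiently similar output. The toy synonym table below stands in for the paper's ontology-based semantic similarity and logical inference; the threshold and names are assumptions.

```python
# Toy stand-in for ontology-based similarity between service parameters.
SYNONYMS = {
    ("city", "town"): 0.9,
    ("price", "cost"): 0.85,
    ("isbn", "bookid"): 0.7,
}

def similarity(a: str, b: str) -> float:
    """Symmetric lookup; identical parameter names match perfectly."""
    if a == b:
        return 1.0
    return SYNONYMS.get((a, b)) or SYNONYMS.get((b, a)) or 0.0

def parameters_match(outputs, inputs, threshold=0.8):
    """A composition step is feasible when every required input is
    covered by some sufficiently similar upstream output."""
    return all(
        any(similarity(o, i) >= threshold for o in outputs) for i in inputs
    )

ok = parameters_match(["city", "price"], ["town", "cost"])  # covered
bad = parameters_match(["city"], ["isbn"])                  # not covered
```

In the paper's setting the similarity and inference would come from the logic Petri net model rather than a lookup table; the sketch only shows the matching decision itself.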


Author(s):  
David Sánchez ◽  
Montserrat Batet

The Information Content (IC) of a concept quantifies the amount of information it provides when appearing in a context. IC was traditionally computed as a function of concept appearance probabilities in corpora, but corpus dependency and data sparseness hampered results. More recently, other authors have tried to overcome these limitations by estimating IC from the knowledge modeled in an ontology. In this paper, the authors develop this idea, proposing a new model to compute the IC of a concept by exploiting the taxonomic knowledge modeled in an ontology. In comparison with related works, their proposal aims to better capture the semantic evidence found in the ontology. To test their approach, the authors applied it to well-known semantic similarity measures, which were evaluated using standard benchmarks. Results show that the use of their model produces, in most cases, more accurate similarity estimations than related works.
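A sketch of the general idea, computing IC from taxonomic structure alone: concepts with many leaf descendants are general (low IC), while concepts with many subsumers are specific (high IC). The formula used here follows one published ontology-based variant and may differ in detail from the authors' exact model; the toy taxonomy is an assumption for illustration.

```python
import math

TAXONOMY = {            # child -> parents
    "entity": [],
    "animal": ["entity"],
    "dog": ["animal"],
    "cat": ["animal"],
    "plant": ["entity"],
}

def subsumers(c):
    """The concept itself plus all of its taxonomic ancestors."""
    seen, stack = {c}, [c]
    while stack:
        for p in TAXONOMY[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def leaves(c):
    """Leaf concepts reachable downward from c (c itself if it is a leaf)."""
    children = {n: [m for m, ps in TAXONOMY.items() if n in ps] for n in TAXONOMY}
    result, stack = set(), [c]
    while stack:
        n = stack.pop()
        if children[n]:
            stack.extend(children[n])
        else:
            result.add(n)
    return result

MAX_LEAVES = len(leaves("entity"))  # leaves of the whole taxonomy

def ic(c):
    # More leaf descendants -> more general -> lower IC;
    # more subsumers -> more specific -> higher IC.
    return -math.log((len(leaves(c)) / len(subsumers(c)) + 1) / (MAX_LEAVES + 1))
```

With this taxonomy, `ic("dog")` exceeds `ic("animal")`, which in turn exceeds `ic("entity")`, matching the intuition that deeper, more specific concepts carry more information.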


2021 ◽  
pp. 1-12
Author(s):  
Fuqiang Zhao ◽  
Zhengyu Zhu ◽  
Ping Han

To measure semantic similarity between words, this paper presents DFRVec, a novel model that encodes multiple kinds of semantic information about a word in WordNet into a vector space. First, three sub-models are proposed: 1) DefVec, encoding the definitions of a word in WordNet; 2) FormVec, encoding the part-of-speech (POS) of a word in WordNet; and 3) RelVec, encoding the relations of a word in WordNet. Then, by combining the three sub-models with an existing word embedding, the full model for generating a word's vector is obtained. Finally, based on DFRVec and path information in WordNet, a new method, DFRVec+Path, is presented for measuring semantic similarity between words. Experiments on ten benchmark datasets show that DFRVec+Path outperforms many existing methods on semantic similarity measurement.
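The combination step can be sketched as follows. How the sub-vectors are merged and how the vector score is mixed with the WordNet path score are not specified in the abstract, so the concatenation, the path formula, and the mixing weight `alpha` below are all assumptions for illustration.

```python
import math

def combine(def_vec, form_vec, rel_vec, embed_vec):
    """Concatenate the three sub-model vectors with an existing embedding
    (one plausible combination; the paper's exact scheme is not given)."""
    return def_vec + form_vec + rel_vec + embed_vec  # list concatenation

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def path_score(len_path):
    """Toy WordNet-style path score: shorter taxonomy paths -> higher similarity."""
    return 1.0 / (1.0 + len_path)

def dfrvec_path_similarity(vec_a, vec_b, len_path, alpha=0.5):
    """Mix vector cosine with the path score; alpha is an assumed weight."""
    return alpha * cosine(vec_a, vec_b) + (1 - alpha) * path_score(len_path)

# Usage: identical vectors at path length 0 give the maximum score.
vec = combine([1.0, 0.0], [0.0, 1.0], [1.0], [0.5])
score = dfrvec_path_similarity(vec, vec, len_path=0)
```

The point of the mixture is that distributional evidence (the vectors) and structural evidence (the WordNet path) capture complementary signals, so their combination can outperform either alone.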


Author(s):  
H. Akabori ◽  
K. Nishiwaki ◽  
K. Yoneta

By improving the predecessor Model HS-7 electron microscope to make it easier to operate, we have recently completed the new Model HS-8 electron microscope, featuring higher performance and ease of operation.

