Approaches to assessing the semantic similarity of texts in a multilingual space

Author(s):  
Aida Hakimova ◽  
Michael Charnine ◽  
Aleksey Klokov ◽  
Evgenii Sokolov

This paper is devoted to the development of a methodology for evaluating the semantic similarity of texts in different languages. The study is based on the hypothesis that the proximity of vector representations of terms in a semantic space can be interpreted as semantic similarity in a cross-lingual environment. Each text is associated with a vector in a single multilingual semantic vector space, and the semantic similarity of texts is determined by the proximity of the corresponding vectors. We propose a quantitative indicator, the Index of Semantic Textual Similarity (ISTS), that measures the degree of semantic similarity of multilingual texts on the basis of identified implicit cross-lingual semantic links. The measure of semantic similarity expresses the presence of common terms, phrases or word combinations in two texts. The parameters of the algorithm for identifying implicit links are tuned on a thematic collection by maximizing the correlation between implicit links and explicit formal references between documents. The developed algorithm can facilitate the search for closely related documents in the analysis of multilingual patent documentation.
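
A minimal sketch of the core idea, not the authors' ISTS implementation: two texts are embedded in a single multilingual vector space and their cosine proximity is read as semantic similarity. The sentence-transformers model name and the 0.6 threshold below are illustrative assumptions.

# Sketch: cosine proximity of texts embedded in one multilingual vector space.
# The model choice and the threshold are placeholders, not the paper's settings.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def semantic_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity of two texts, regardless of their languages."""
    va, vb = model.encode([text_a, text_b])
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

score = semantic_similarity("patent claim describing a lithium battery electrode",
                            "reivindicación de patente sobre un electrodo de batería de litio")
implicit_link = score > 0.6  # the threshold would be tuned against explicit references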

Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1372
Author(s):  
Sanjanasri JP ◽  
Vijay Krishna Menon ◽  
Soman KP ◽  
Rajendran S ◽  
Agnieszka Wolk

Linguists have long focused on qualitative comparison of the semantics of different languages. Evaluating semantic interpretation across a disparate language pair such as English and Tamil is an even more formidable task than for Slavic languages. The concept of word embedding in Natural Language Processing (NLP) has opened a felicitous opportunity to quantify linguistic semantics. Multilingual tasks can be performed by projecting the word embeddings of one language onto the semantic space of the other. This research presents a suite of data-efficient deep learning approaches to deduce the transfer function from the embedding space of English to that of Tamil, deploying three popular embedding algorithms: Word2Vec, GloVe and FastText. A novel evaluation paradigm was devised for the generated embeddings to assess their effectiveness, using the original embeddings as ground truth. The transferability of the proposed model to other target languages was assessed via pre-trained Word2Vec embeddings for Hindi and Chinese. We empirically show that with a bilingual dictionary of a thousand words and a small monolingual target (Tamil) corpus, useful embeddings can be generated by transfer learning from a well-trained source (English) embedding. Furthermore, we demonstrate the usability of the generated target embeddings in a few NLP use-case tasks, such as text summarization, part-of-speech (POS) tagging and bilingual dictionary induction (BDI), bearing in mind that these are not the only possible applications.
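
As a hedged illustration of the transfer-function idea, the sketch below fits a simple least-squares linear map from English to Tamil vectors over a bilingual dictionary; the paper itself uses deep learning models, and the random vectors here are toy stand-ins for real Word2Vec, GloVe or FastText embeddings.

# Sketch: learn a linear transfer function W so that (English vector) @ W
# approximates the paired Tamil vector; toy data replaces real embeddings.
import numpy as np

def fit_transfer(source_vecs: np.ndarray, target_vecs: np.ndarray) -> np.ndarray:
    """Least-squares W minimising ||source_vecs @ W - target_vecs||."""
    W, *_ = np.linalg.lstsq(source_vecs, target_vecs, rcond=None)
    return W

rng = np.random.default_rng(0)
dict_en = rng.normal(size=(1000, 300))   # English vectors of 1,000 dictionary words
dict_ta = rng.normal(size=(1000, 300))   # their Tamil counterparts
W = fit_transfer(dict_en, dict_ta)
projected = rng.normal(size=300) @ W     # map an unseen English word into Tamil space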


2014 ◽  
Vol 14 (3) ◽  
pp. 25-36
Author(s):  
Bohdan Pavlyshenko

Abstract This paper describes the analysis of possible differentiation of an author's idiolect in the space of semantic fields; it also analyzes the clustering of text documents in the vector space of semantic fields and in a semantic space with an orthogonal basis. The analysis showed that a vector space model based on semantic fields is effective in cluster analysis of authors' texts in English fiction. The study of the distribution of authors' texts across the cluster structure showed the presence of areas of the semantic space that represent the idiolects of individual authors. Such areas are described by clusters in which only one author dominates. Clusters in which the texts of several authors dominate can be considered areas of semantic similarity between authors' styles. SVD factorization of the semantic field matrix makes it possible to significantly reduce the dimensionality of the semantic space in the cluster analysis of authors' texts. Clustering in the semantic field vector space can be effective in a comparative analysis of authors' styles and idiolects. The clusters of some authors' idiolects are semantically invariant and do not depend on changes in the basis of the semantic space or the clustering method.
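
A minimal sketch of the pipeline shape described above, with scikit-learn's TruncatedSVD and KMeans standing in for the paper's exact factorization and clustering setup, and a random document-by-semantic-field matrix as toy input.

# Sketch: SVD-reduce a document x semantic-field matrix, then cluster documents.
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
doc_field_matrix = rng.random((120, 500))   # 120 texts x 500 semantic fields (toy data)

reduced = TruncatedSVD(n_components=50, random_state=1).fit_transform(doc_field_matrix)
labels = KMeans(n_clusters=8, n_init=10, random_state=1).fit_predict(reduced)
# Clusters dominated by a single author would mark regions of that author's idiolect.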


2019 ◽  
Vol 27 (1) ◽  
pp. 47-60
Author(s):  
Roland Coghetto

Summary Using Mizar [1], in the context of a real vector space, we introduce the concept of the affine ratio of three aligned points (see [5]). It is also equivalent to the notion of "mesure algébrique", to the opposite of the notion of Teilverhältnis, or to the opposite of the ordered length-ratio [9]. In the second part, we introduce the classic notion of the "cross-ratio" of four aligned points in a real vector space. Finally, we show that if the real vector space is the real line, the notion corresponds to the classical one [9]: the cross-ratio of a quadruple of distinct points on the real line with coordinates x1, x2, x3, x4 is given by $$(x_1, x_2; x_3, x_4) = \frac{x_3 - x_1}{x_3 - x_2} \cdot \frac{x_4 - x_2}{x_4 - x_1}.$$ In the Mizar Mathematical Library, vector spaces were first defined by Kusak, Leonczuk and Muzalewski in the article [6], while the real vector space itself was defined by Trybulec [10] and the complex vector space by Endou [4]. Nakasho and Shidama have developed a solution for exploring the notions introduced by different authors [7]; the definitions can be directly linked in the HTMLized version of the Mizar library. The study of the cross-ratio will continue within the framework of the Klein-Beltrami model [2], [3]. For a generalized cross-ratio, see Papadopoulos [8].
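
For a quick numerical check of the definition, the cross-ratio formula above transcribes directly into a small function (illustrative code, not part of the Mizar formalization):

# Cross-ratio of four distinct points on the real line.
def cross_ratio(x1: float, x2: float, x3: float, x4: float) -> float:
    return ((x3 - x1) / (x3 - x2)) * ((x4 - x2) / (x4 - x1))

print(cross_ratio(0.0, 1.0, 2.0, 3.0))  # (2/1) * (2/3) = 4/3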


Author(s):  
Mikel Artetxe ◽  
Sebastian Ruder ◽  
Dani Yogatama

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shaofei Wang ◽  
Depeng Dang

Purpose: Previous knowledge base question answering (KBQA) models only consider the monolingual scenario and cannot be directly extended to the cross-lingual scenario, in which the language of the questions differs from that of the knowledge base (KB). Although a machine translation (MT) model can bridge the gap by translating questions into the language of the KB, the noise in translated questions can accumulate and sharply impair the final performance. Therefore, the authors propose a method to improve the robustness of KBQA models in the cross-lingual scenario. Design/methodology/approach: The authors propose a knowledge distillation-based robustness enhancement (KDRE) method. Specifically, a monolingual model (teacher) is first trained on ground truth (GT) data. Then, to imitate practical noise, a noise-generating model is designed to inject two types of noise into questions: general noise and translation-aware noise. Finally, the noisy questions are fed to the student model. Meanwhile, the student model is jointly trained on GT data and on distilled data derived from the teacher when it is fed GT questions. Findings: The experimental results demonstrate that KDRE improves the performance of models in the cross-lingual scenario, and that the performance of each module in the KBQA model is improved by KDRE. The knowledge distillation (KD) and the noise-generating model complementarily boost the robustness of the models. Originality/value: The authors are the first to extend KBQA models from the monolingual to the cross-lingual scenario, and the first to implement KD for KBQA to develop robust cross-lingual models.
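
A hedged sketch of the distillation component: the student is trained on ground-truth labels for noisy questions while also matching the teacher's soft outputs on the corresponding clean questions. The loss weighting, temperature and tensor shapes are illustrative assumptions, not the authors' implementation.

# Sketch: standard knowledge-distillation loss combining hard and soft targets.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, gold_labels, temperature=2.0, alpha=0.5):
    """Weighted sum of cross-entropy on gold labels and KL to teacher soft targets."""
    hard = F.cross_entropy(student_logits, gold_labels)
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1 - alpha) * soft

# student_logits: outputs on noisy questions; teacher_logits: outputs on clean GT questions.
loss = kd_loss(torch.randn(8, 20), torch.randn(8, 20), torch.randint(0, 20, (8,)))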


Author(s):  
Julien Hay ◽  
Tim Van de Cruys ◽  
Philippe Muller ◽  
Bich-Liên Doan ◽  
Fabrice Popineau ◽  
...  

Author(s):  
Blaž Fortuna ◽  
Nello Cristianini ◽  
John Shawe-Taylor

We present a general method using kernel canonical correlation analysis (KCCA) to learn a semantic space of text from an aligned multilingual collection of text documents. The semantic space provides a language-independent representation of text and enables comparison between text documents from different languages. In experiments, we apply KCCA to cross-lingual retrieval of text documents, where the query is written in only one language, and to cross-lingual text categorization, where we train a cross-lingual classifier.
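
A minimal sketch of the aligned-corpus idea, with scikit-learn's linear CCA standing in for kernel CCA: paired document vectors from two languages are projected into a shared space where a query in one language can be matched against documents in the other. The dimensions and random vectors are toy assumptions.

# Sketch: linear CCA on an aligned bilingual corpus for cross-lingual retrieval.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(2)
en_docs = rng.random((300, 100))   # e.g. reduced TF-IDF vectors of English documents
fr_docs = rng.random((300, 120))   # their aligned translations in another language

cca = CCA(n_components=20).fit(en_docs, fr_docs)
en_sem, fr_sem = cca.transform(en_docs, fr_docs)   # language-independent representations

query_sem = cca.transform(rng.random((1, 100)))    # an English query vector
scores = fr_sem @ query_sem.ravel()                # rank foreign documents for the query
best_match = int(np.argmax(scores))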


BMC Genomics ◽  
2019 ◽  
Vol 20 (S9) ◽  
Author(s):  
Xiaoshi Zhong ◽  
Rama Kaalia ◽  
Jagath C. Rajapakse

Abstract Background: Semantic similarity between Gene Ontology (GO) terms is a fundamental measure for many bioinformatics applications, such as determining functional similarity between genes or proteins. Most previous research exploited information content to estimate the semantic similarity between GO terms; recently, some research has exploited word embeddings to learn vector representations for GO terms from a large-scale corpus. In this paper, we propose a novel method, named GO2Vec, that exploits graph embeddings to learn vector representations for GO terms from the GO graph. GO2Vec combines information from both the GO graph and GO annotations, and its learned vectors can be applied to a variety of bioinformatics applications, such as calculating functional similarity between proteins and predicting protein-protein interactions. Results: We conducted two kinds of experiments to evaluate the quality of GO2Vec: (1) functional similarity between proteins on the Collaborative Evaluation of GO-based Semantic Similarity Measures (CESSM) dataset and (2) prediction of protein-protein interactions on the Yeast and Human datasets from the STRING database. Experimental results demonstrate the effectiveness of GO2Vec over information content-based and word embedding-based measures. Conclusion: Our experimental results demonstrate the effectiveness of using graph embeddings to learn vector representations from undirected GO and GOA graphs. They also demonstrate that GO annotations provide useful information for computing the similarity between GO terms and between proteins.
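
To illustrate the downstream use of the learned term vectors, the sketch below represents each protein by the mean vector of its GO annotations and scores protein pairs by cosine similarity; the toy term_vectors dictionary stands in for vectors actually learned from the GO graph (e.g. via a graph-embedding method), and the annotations are invented examples.

# Sketch: protein-protein similarity from graph-embedded GO term vectors.
import numpy as np

# Toy stand-ins: in practice these vectors come from embedding the GO graph.
term_vectors = {f"GO:{i:07d}": np.random.default_rng(i).normal(size=64) for i in range(1, 6)}
annotations = {
    "protein_A": ["GO:0000001", "GO:0000002"],
    "protein_B": ["GO:0000002", "GO:0000003", "GO:0000004"],
}

def protein_vector(protein: str) -> np.ndarray:
    """Mean of the vectors of the GO terms annotating the protein."""
    return np.mean([term_vectors[t] for t in annotations[protein]], axis=0)

def protein_similarity(p1: str, p2: str) -> float:
    a, b = protein_vector(p1), protein_vector(p2)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(protein_similarity("protein_A", "protein_B"))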

