Finding the Semantic Relationship Between Wikipedia Articles Based on a Useful Entry Relationship

2020 ◽  
pp. 838-859
Author(s):  
Lin-Chih Chen

Wikipedia is the largest online encyclopedia, and anyone can create and edit its articles. On the one hand, because it contains a huge number of articles across many language versions, it frequently faces synonymy and polysemy problems. On the other hand, since similar Wikipedia articles may discuss the same topic, a suitable method is needed to effectively identify the semantic relationships between articles. This paper first uses three well-known semantic analysis models, LSA, PLSA, and LDA, as evaluation benchmarks. It then uses the entry relationships between Wikipedia articles to design its own model. According to the experimental results and analysis, the proposed model offers high performance at low cost compared with the other models. Its advantages are as follows: (1) it is a good model for finding the semantic relationships between Wikipedia articles; (2) it is suitable for dealing with huge document collections.
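The abstract does not give the model's exact formulation, but since it is built on entry (link) relationships rather than term-document statistics, a minimal sketch of link-based relatedness is Jaccard similarity over the sets of entries two articles link to. The article names and link sets below are hypothetical illustrations, not data from the paper.

```python
# Minimal sketch: link-based semantic relatedness between Wikipedia articles.
# The link sets below are hypothetical, for illustration only.

def jaccard_relatedness(links_a: set, links_b: set) -> float:
    """Jaccard similarity of two articles' outgoing-entry sets."""
    if not links_a and not links_b:
        return 0.0
    return len(links_a & links_b) / len(links_a | links_b)

links = {
    "Latent semantic analysis": {"Singular value decomposition",
                                 "Information retrieval", "Topic model"},
    "Latent Dirichlet allocation": {"Topic model", "Bayesian inference",
                                    "Information retrieval"},
    "Hevea brasiliensis": {"Natural rubber", "Latex"},
}

print(jaccard_relatedness(links["Latent semantic analysis"],
                          links["Latent Dirichlet allocation"]))  # 0.5
print(jaccard_relatedness(links["Latent semantic analysis"],
                          links["Hevea brasiliensis"]))           # 0.0
```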


1948 ◽  
Vol 21 (4) ◽  
pp. 853-859
Author(s):  
R. F. A. Altman

Abstract. As numerous investigators have shown, some of the nonrubber components of Hevea latex have a decided accelerating action on the process of vulcanization. A survey of the literature on this subject points to the validity of certain general facts.
1. Among the nonrubber components of latex which have been investigated, certain nitrogenous bases appear to be the most important for accelerating the rate of vulcanization.
2. These nitrogen bases apparently occur partly naturally in fresh latex and partly as the result of putrefaction, heating, and other decomposition processes.
3. The nitrogen bases naturally present in fresh latex were later identified by Altman as trigonelline, stachydrine, betonicine, choline, methylamine, trimethylamine, and ammonia. These bases are markedly active in vulcanization, as will be seen in the section on experimental results.
4. The nitrogenous substances formed by the decomposition processes have been only partly identified: on the one hand as tetra- and pentamethylenediamine and some amino acids, on the other hand as alkaloids, proline, diamino acids, etc.
5. It has been generally accepted that these nitrogenous substances are derived from the proteins of the latex.
6. Decomposition appears to be connected with the formation of a considerable amount of acids.
7. The production of volatile nitrogen bases as a rule accompanies the decomposition processes. These volatile products have not been identified.
8. The active nitrogen bases, whether already formed or derived from complex nitrogenous substances, seem to be soluble in water but only slightly soluble in acetone.


2020 ◽  
pp. 119-133
Author(s):  
Beata Kuryłowicz

This article attempts a semantic analysis of the anatomical vocabulary collected by Michał Abraham Troc in Nowy dykcjonarz, published in Leipzig in 1764. The aim of the individual analyses, based on lexical field theory, is to demonstrate the meaning of the lexemes, to determine their place within a field, and to disclose semantic relationships: synonymy, polysemy, and hyponymy. The semantic analysis presented in this article clearly demonstrates the abundance and differentiation of 18th-century anatomical vocabulary, as well as the prevalence of native over borrowed words. Among 250 names, only eleven units are borrowings from foreign languages: seven Latin and four German. This attests to the fundamental role of native lexis, especially colloquial vocabulary, in the formation of Polish anatomical terminology, and, more broadly, of medical terminology, in the first phase of its development, which continued until the end of the 18th century. Also of note are the non-uniform arrangement of lexemes in individual fields and the asymmetry in their number. The selected lexical fields are characterised by non-uniform size, different levels of semantic stratification, and differentiated degrees of generality of the words they contain. The semantic relations observed in the analysed anatomical vocabulary, especially synonymy and polysemy, on the one hand confirm the differentiation of anatomical lexis and, on the other, indicate a lack of precision in expressing content by the discussed lexical units.


2015 ◽  
Vol 4 (2) ◽  
pp. 103
Author(s):  
Axelle Vatrican

Abstract. This paper presents a semantic analysis of a periphrastic construction which has not previously been studied in Spanish: soler + stative (Un poeta suele ser un hombre normal, "A poet usually is a normal man"). Whereas the habitual construction has been studied at length (Juan suele cantar, "Juan usually sings"), the first one does not seem to carry the same interpretation. We claim that two readings must be distinguished: the habitual reading on the one hand and the generic reading on the other. Following Menéndez-Benito (2013), Krifka et al. (1995), and Schubert & Pelletier (1989), among others, we argue that soler contains a frequentative adverb of quantification Q. In the habitual reading, the Q adverb quantifies over an individual participating in an event at a time t (Juan está cantando, "Juan is singing"), whereas in the generic reading, the Q adverb quantifies over a characterizing predicate (un poeta es un hombre normal, "A poet is a normal man"). In the habitual reading, the NP must refer to an individual and the VP to a dynamic event anchored in space and time. In the generic reading, the NP must refer to a class of objects and the VP to a stative predicate.
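The abstract does not spell out the logical forms, but in the tripartite notation of Krifka et al. (1995) the two readings it distinguishes can be sketched roughly as follows; this is a schematic rendering for orientation, not the paper's own formalization:

```latex
% Habitual reading: the Q adverb quantifies over events involving an individual.
% "Juan suele cantar" -- for most relevant events e involving Juan,
% e is a singing event.
\[
\textsc{hab}:\quad Q\,e\;[\,C(\mathit{juan}, e)\,]\;[\,\mathrm{sing}(\mathit{juan}, e)\,]
\]

% Generic reading: the Q adverb quantifies over instances of a
% characterizing predicate.
% "Un poeta suele ser un hombre normal" -- for most poets x, x is a normal man.
\[
\textsc{gen}:\quad Q\,x\;[\,\mathrm{poet}(x)\,]\;[\,\mathrm{normal}(x)\,]
\]
```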


2016 ◽  
Vol 7 (1) ◽  
pp. 1-42
Author(s):  
Esa Itkonen

Common claims within cognitive semantics (e.g. Johnson 1987; Lakoff 1987; Langacker 1987) are that "the most fundamental issue in linguistic theory is the nature of meaning" and that "meaning is a matter of conceptualization". But the latter claim creates a problem. On the one hand, for many cognitive semanticists conceptualization takes place below the level of consciousness. On the other hand, semantic analysis is carried out on the level of consciousness, namely by means of (conscious) intuition-cum-introspection. What, then, is meaning? As Wittgenstein argues, meaning is use, understood as a web of intersubjective norms, comparable to the rules of a game and accessible to conscious intuition. In this article I elaborate on this claim, and thus offer a critique of those who equate linguistic meaning with conceptualizations understood as private mental representations. Furthermore, I argue that the non-causal study of norms (langue) must be kept separate from the causal study of (norm-following or norm-breaking) behaviour (parole). Because of its variationist nature, linguistic behaviour demands statistical explanation.


2021 ◽  
Vol 52 (4) ◽  
pp. 204-226
Author(s):  
A.P. Zaostrovtsev ◽  
V.V. Matveev

The article examines the evolution of the analysis of voter behavior in search of an answer to the question: why does a voter vote? It is shown how the approach to the voter as a rational, egoistic investor gave rise to what is commonly called the "voter's paradox" in political and economic theory. Further research was aimed at explaining this paradox. On the one hand, there appeared the concept of an expressive voter, who expresses himself through participation in elections; on the other hand, that of an altruistic voter, who overcomes egoism. The latest theoretical development explains participation in voting by appeal to "relational goods", which differ in their qualities from both public and private goods. With this approach, the "voter's paradox" finds its most consistent solution, and it is in this approach that the shift from methodological individualism to institutional individualism is most clearly manifested. The authors of the article highlight this shift as a new trend in explaining the reasons for voting. At the same time, it is argued that the conceptual diversity considered here reflects the multidimensional features of human nature, and it is this fact that gives rise to the ambiguity and contradictoriness of experimental results.
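The abstract does not reproduce the calculus behind the paradox, but the standard textbook formulation (Downs's expected-utility condition and the Riker-Ordeshook expressive extension) can be sketched as follows; the symbols are the conventional ones, not the authors':

```latex
% Downsian calculus of voting: a rational egoist votes only if the expected
% benefit exceeds the cost,
\[
p \cdot B \;>\; C ,
\]
% where p is the probability of casting the decisive vote, B the benefit from
% one's preferred outcome, and C the cost of voting. In a large electorate p is
% vanishingly small, so p*B - C < 0 for nearly everyone, yet turnout is high:
% the "voter's paradox". Riker and Ordeshook (1968) restore consistency by
% adding an expressive/duty term D:
\[
p \cdot B + D \;>\; C .
\]
```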


2015 ◽  
Vol 4 (1) ◽  
pp. 31
Author(s):  
Raquel González Rodríguez

This paper focuses on resultative and progressive periphrases in Spanish: <estar 'to be' + participle> and <estar 'to be' + gerund>, respectively. These periphrases have been associated with several negated constructions. On the one hand, the negative particle no 'not' can precede the auxiliary verb (<no estar 'not to be' + participle> and <no estar 'not to be' + gerund>); on the other hand, we have the structure <estar sin 'to be without' + infinitive>. Contrary to what has been suggested in the literature, I will show that these negative constructions have different interpretations, and I will develop a semantic analysis of them. Furthermore, I will offer new evidence in favor of the existence of negative events.
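The abstract does not give the logical forms, but the contrast it points to can be sketched in neo-Davidsonian event semantics; this is an illustration of the general idea of negative events, not the analysis developed in the paper:

```latex
% Ordinary sentential negation: negation scopes over the event quantifier,
% so no event of the relevant kind exists.
\[
\neg\,\exists e\,[\,\mathrm{read}(e) \wedge \mathrm{Agent}(e, \mathit{juan})\,]
\]

% Negative-event analysis (a candidate reading for <estar sin + infinitive>):
% there is a state of not-reading that holds of Juan and can itself be
% located in time and modified.
\[
\exists e\,[\,\mbox{not-read}(e) \wedge \mathrm{Holder}(e, \mathit{juan})\,]
\]
```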


1970 ◽  
Vol 14 ◽  
pp. 102-126
Author(s):  
Frank L. Chan ◽  
W. Barclay Jones

Abstract. An x-ray spectrometer, together with experimental results, is described here using a radioisotope source, Fe-55, with a half-life of 2.6 years. As a result of the disintegration, the manganese x-rays are capable of exciting fluorescent x-rays of such elements as sulfur, chlorine, potassium, calcium, scandium, and titanium in aqueous solutions. These elements, with Kα wavelengths ranging from 5.3729 Å to 2.7496 Å, may be placed between the very soft x-rays on the one hand and the hard x-rays on the other. The x-ray spectrometer described here has achieved a resolution of 136 eV FWHM. Simultaneously, these elements have also been quantitatively determined by conventional x-ray fluorescence spectrometers. Since one of the spectrometers is designed to operate in vacuum as well as in helium or air, the determinations of sulfur, potassium, and calcium were carried out in vacuum. The determination of chlorine was carried out in a helium atmosphere; calcium, scandium, and titanium were determined in air with an air-path spectrometer. In the present study, aqueous solutions containing these elements were used. The use of aqueous solutions has the inherent advantages of homogeneity and freedom from particle-size effects.
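As a quick check on the quoted wavelength range, photon energy follows from E = hc/λ ≈ 12.398 keV·Å / λ. The short script below (not part of the original paper) converts the two endpoints, which correspond to the sulfur and titanium Kα lines:

```python
# Convert x-ray K-alpha wavelengths (angstroms) to photon energies (keV).
# Illustrative check, not code from the paper: E [keV] = 12.398 / lambda [A].

HC_KEV_ANGSTROM = 12.398  # h*c in keV*angstrom (approximate)

def photon_energy_kev(wavelength_angstrom: float) -> float:
    """Photon energy in keV for a given wavelength in angstroms."""
    return HC_KEV_ANGSTROM / wavelength_angstrom

for element, wavelength in [("S K-alpha", 5.3729), ("Ti K-alpha", 2.7496)]:
    print(f"{element}: {wavelength} A -> {photon_energy_kev(wavelength):.2f} keV")
# S K-alpha: 5.3729 A -> 2.31 keV   (soft end of the range)
# Ti K-alpha: 2.7496 A -> 4.51 keV  (harder end of the range)
```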


Algorithms ◽  
2019 ◽  
Vol 13 (1) ◽  
pp. 14
Author(s):  
Jianjian Ji ◽  
Gang Yang

Existing image completion methods mostly target missing regions that are small or located in the middle of the image. When the regions to be completed are large or near the edge of the image, the completion results tend to be blurred or distorted due to the lack of context information, and a large blank area can remain in the final results. In addition, the unstable training of the generative adversarial network is prone to produce pseudo-color in the completion results. Aiming at the two problems above, a method for completing images with large or edge-missing areas is proposed, and the network structures are improved accordingly. On the one hand, the method overcomes the lack of context information, thereby ensuring the realism of the generated texture details; on the other hand, it suppresses the generation of pseudo-color, which guarantees the consistency of the whole image in both vision and content. The experimental results show that the proposed method achieves better completion results when completing large or edge-missing areas.
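The abstract does not detail the improved network structures, but most GAN-based completion pipelines share the composition step sketched below: known pixels are kept from the input, and the generator's output is used only inside the missing region. This is a generic illustration with hypothetical shapes, not this paper's architecture.

```python
import numpy as np

def compose_completion(image: np.ndarray, generated: np.ndarray,
                       mask: np.ndarray) -> np.ndarray:
    """Blend generator output into the masked region.

    mask == 1 marks missing pixels; mask == 0 marks known pixels.
    """
    return mask * generated + (1.0 - mask) * image

# Hypothetical example: a 64x64 RGB image missing a band along its right edge.
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
generated = rng.random((64, 64, 3))   # stand-in for the generator output
mask = np.zeros((64, 64, 1))
mask[:, 48:, :] = 1.0                 # edge-missing region

completed = compose_completion(image, generated, mask)
assert np.allclose(completed[:, :48], image[:, :48])  # known pixels preserved
```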


Author(s):  
Shizhu He ◽  
Kang Liu ◽  
Weiting An

Customers ask questions, and customer service staff answer them; this is the basic service pattern of customer service (CS). The process of CS is a typical multi-round conversation. However, there are no explicit correspondence relations among conversational utterances. This paper focuses on obtaining explicit alignments of question and answer utterances in CS. This is not only an important task for dialogue analysis, but also a way to obtain a large amount of valuable training data for learning dialogue systems. In this work, we propose end-to-end models for aligning question (Q) and answer (A) utterances in CS conversations with recurrent pointer networks (RPN). On the one hand, RPN-based alignment models are able to model the conversational context and the mutual influence of different Q-A alignments. On the other hand, they are able to address the issue of empty and multiple alignments for some utterances in a unified manner. We construct a dataset from an in-house online CS. The experimental results demonstrate that the proposed models effectively learn the alignments of question and answer utterances.
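The abstract does not specify the RPN architecture, but the pointer mechanism it builds on can be sketched as additive attention over candidate utterances, with an extra "none" slot to permit empty alignments. All names, dimensions, and parameters below are hypothetical illustrations of the idea, not the authors' model.

```python
import numpy as np

def pointer_distribution(q, answers, W_q, W_a, v, none_vec):
    """Softmax pointer over candidate answer utterances plus a 'none' slot.

    q:        (d,)   decoder state for the current question utterance
    answers:  (n, d) encoder states of candidate answer utterances
    none_vec: (d,)   learned embedding standing in for "no aligned answer"
    """
    candidates = np.vstack([answers, none_vec])           # (n+1, d)
    scores = np.tanh(candidates @ W_a.T + q @ W_q.T) @ v  # additive attention
    exp = np.exp(scores - scores.max())                   # stable softmax
    return exp / exp.sum()

rng = np.random.default_rng(0)
d, n = 8, 4  # hypothetical hidden size and candidate count
W_q, W_a = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)
dist = pointer_distribution(rng.standard_normal(d),
                            rng.standard_normal((n, d)),
                            W_q, W_a, v, rng.standard_normal(d))
print(dist.round(3), dist.sum())  # distribution over 4 answers + 'none'
```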

