The conceptual basis of ablativity

2019 · Vol 17 (2) · pp. 511-530
Author(s): Héctor Hernández Arocha, Elia Hernández Socas

Abstract: The aim of the present paper is to define the notion of ablativity in terms of its event structure. To achieve this goal, the authors discuss the contributions of several semantic theories dealing with that conceptual class and finally propose their own definition, based extensively on the cognitive frame-based model of Wotjak (2006; 2011a; 2016). Even though the survey is mainly concerned with the theoretically relevant aspects of this semantic class, many of these aspects are also illustrated with examples from different languages, especially Latin, Spanish and German. Finally, the authors explain how ablativity relates conceptually to other semantic classes, such as concomitance or possession.

2020 · Vol 25 (1) · pp. 101-123
Author(s): Dirk Speelman, Stefan Grondelaers, Benedikt Szmrecsanyi, Kris Heylen

Abstract: In this paper, we revisit earlier analyses of the distribution of er 'there' in adjunct-initial sentences to demonstrate the merits of computational upscaling in syntactic variation research. Contrary to previous studies, in which major semantic and pragmatic predictors (viz. adjunct type, adjunct concreteness, and verb specificity) had to be coded manually, the present study operationalizes these predictors on the basis of distributional analysis: instead of hand-coding for specific semantic classes, we determine the semantic class of the adjunct, verb, and subject automatically by clustering the lexemes in those slots on the basis of their 'semantic passport' (as established from their distributional behaviour in a reference corpus). These clusters are subsequently interpreted as proxies for semantic classes. In addition, the pragmatic factor 'subject predictability' is operationalized automatically on the basis of collocational attraction measures, as well as the distributional similarity between the other slots and the subject. We demonstrate that the distribution of er can be modelled as successfully with the automated approach as with manual annotation. Crucially, the new method replicates our earlier findings that the Netherlandic data are easier to model than the Belgian data, and that lexical collocations play a bigger role in the Netherlandic than in the Belgian data. On a methodological level, the proposed automatization opens up a range of opportunities. Most important is its scalability: it allows a larger gamut of alternations to be investigated in one study, and much larger datasets to represent each alternation.
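The clustering-as-proxy idea can be sketched in a few lines: build a 'semantic passport' (a bag of context words) for each lexeme, then group lexemes with similar passports. This is a stdlib-only toy, not the authors' pipeline; real passports come from a large reference corpus, and the window size, similarity measure, and `threshold` below are illustrative assumptions.

```python
import math
from collections import Counter, defaultdict

def context_vectors(sentences, targets, window=2):
    """Build a bag-of-context 'semantic passport' for each target lexeme."""
    vecs = defaultdict(Counter)
    for sent in sentences:
        toks = sent.lower().split()
        for i, tok in enumerate(toks):
            if tok in targets:
                vecs[tok].update(toks[max(0, i - window):i] + toks[i + 1:i + 1 + window])
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    shared = set(a) & set(b)
    num = sum(a[w] * b[w] for w in shared)
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def cluster(vecs, threshold=0.5):
    """Greedy clustering: a lexeme joins the first cluster whose exemplar
    is distributionally similar enough, otherwise it starts its own."""
    clusters = []
    for word, vec in vecs.items():
        for c in clusters:
            if cosine(vec, vecs[c[0]]) >= threshold:
                c.append(word)
                break
        else:
            clusters.append([word])
    return clusters
```

The resulting clusters can then be used as categorical predictors in a regression model, in place of hand-coded semantic classes.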


2015 · Vol 54 · pp. 83-122
Author(s): Ruben Izquierdo, Armando Suarez, German Rigau

As empirically demonstrated by the Word Sense Disambiguation (WSD) tasks of the last SensEval/SemEval exercises, assigning the appropriate meaning to words in context has resisted all attempts to be successfully addressed. Many authors argue that one possible reason could be the use of inappropriate sets of word meanings. In particular, WordNet has been used as a de facto standard repository of word meanings in most of these tasks. Thus, instead of using the word senses defined in WordNet, some approaches have derived semantic classes representing groups of word senses. However, the meanings represented by WordNet have only been used for WSD at a very fine-grained sense level or at a very coarse-grained semantic class level (also called SuperSenses). We suspect that an appropriate level of abstraction could lie in between these two levels. The contributions of this paper are manifold. First, we propose a simple method to automatically derive semantic classes at intermediate levels of abstraction covering all nominal and verbal WordNet meanings. Second, we empirically demonstrate that our automatically derived semantic classes outperform classical approaches based on word senses and more coarse-grained sense groupings. Third, we also demonstrate that our supervised WSD system benefits from using these new semantic classes as additional semantic features while reducing the number of training examples. Finally, we also demonstrate the robustness of our supervised semantic class-based WSD system when tested on an out-of-domain corpus.
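The idea of an intermediate abstraction level can be illustrated with a toy sketch: given each sense's hypernym chain, cutting the chain at a chosen depth yields classes between fine-grained senses (the end of the chain) and SuperSenses (near the root). The hand-coded `sense_paths` data and the simple depth-cut rule are illustrative assumptions, not the paper's derivation method, which operates automatically over all of WordNet.

```python
def derive_classes(sense_paths, depth):
    """sense_paths: sense -> hypernym chain ordered from root to the sense
    itself. Returns sense -> class label at the chosen abstraction depth;
    chains shorter than the depth keep their most specific node."""
    return {s: path[min(depth, len(path) - 1)] for s, path in sense_paths.items()}
```

At a shallow depth, `dog.n.01` and `cat.n.01` collapse into one class; at a deeper cut they remain distinct, which is exactly the granularity trade-off the paper explores.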


1999 · Vol 5 (2) · pp. 147-156
Author(s): Ellen Riloff, Jessica Shepherd

Many applications need a lexicon that represents semantic information, but acquiring lexical information is time-consuming. We present a corpus-based bootstrapping algorithm that assists users in creating domain-specific semantic lexicons quickly. Our algorithm uses a representative text corpus for the domain and a small set of ‘seed words’ that belong to a semantic class of interest. The algorithm hypothesizes new words that are also likely to belong to the semantic class because they occur in the same contexts as the seed words. The best hypotheses are added to the seed word list dynamically, and the process iterates in a bootstrapping fashion. When the bootstrapping process halts, a ranked list of hypothesized category words is presented to a user for review. We used this algorithm to generate a semantic lexicon for eleven semantic classes associated with the MUC-4 terrorism domain.
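A minimal sketch of such a bootstrapping loop, assuming a simple window-based notion of 'context' (the original work uses richer extraction contexts); the corpus, window size, and scoring heuristic here are illustrative, and in practice the top hypotheses would be reviewed by a user rather than added blindly.

```python
from collections import Counter

def bootstrap_lexicon(corpus_sentences, seed_words, iterations=5, top_k=5):
    """Corpus-based bootstrapping: hypothesize new category members
    because they occur in the same contexts as the seed words."""
    lexicon = set(seed_words)
    for _ in range(iterations):
        # Collect context signatures (neighbouring words) of known members.
        contexts = Counter()
        for sent in corpus_sentences:
            toks = sent.lower().split()
            for i, tok in enumerate(toks):
                if tok in lexicon:
                    contexts.update(toks[max(0, i - 2):i] + toks[i + 1:i + 3])
        # Score candidate words by how often they share those contexts.
        scores = Counter()
        for sent in corpus_sentences:
            toks = sent.lower().split()
            for i, tok in enumerate(toks):
                if tok in lexicon:
                    continue
                window = toks[max(0, i - 2):i] + toks[i + 1:i + 3]
                scores[tok] += sum(contexts[w] for w in window)
        # Add the best hypotheses to the lexicon and iterate.
        for word, _ in scores.most_common(top_k):
            lexicon.add(word)
    return lexicon
```

Even this toy version shows the characteristic behaviour (and risk) of bootstrapping: words that genuinely share the seeds' contexts are pulled in, but so can frequent context words, which is why the original algorithm ends with human review of the ranked list.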


Author(s): Mamadaliev Ahmadali, Karimova Nodirakhon Abdurashidovna

In this article, verbal lexemes are classified according to the nomination of the activity of nouns: verbs serve to denote the "characteristic activity" of nouns belonging to specific semantic classes, semantic-thematic series and individual lexemes. On this basis, as shown with specific verb examples, verbs can be divided into verbs of narrow and of wide nomination. Depending on the semantic structure of a verb, its direct and figurative meanings differ; often the potential seme of a verb acts as a concretizer, indicating the semantic class, group or thematic series of the nouns it combines with, and it is in speech, in combination with such nouns, that the verb actualizes its meaning. The starting point of this work is the fact that "there are no objects without properties and relations, and no properties and relations without objects". Therefore verbs, like nouns, can be subjected to classification: just as nouns are divided into nouns of being, abstract and concrete nouns, animate and inanimate nouns, anthroponyms and faunonyms, as well as into particular semantic groups, thematic series and individual lexemes, so verbs fall into corresponding classes, as the specific examples indicate. Thus, we conclude that the verb is designed in the language to designate the characteristic activity of certain nouns; combining with such a noun in speech, its actual meaning is revealed, which in turn determines its membership in a particular semantic class, semantic group or thematic series.


Author(s): Yun Niu, Graeme Hirst

The task of question answering (QA) is to find an accurate and precise answer to a natural language question in some predefined text. Most existing QA systems handle fact-based questions that usually take named entities as the answers. In this chapter, the authors take clinical QA as an example to deal with more complex information needs. They propose an approach using semantic class analysis as the organizing principle to answer clinical questions. They investigate three semantic classes that correspond to roles in the commonly accepted PICO format of describing clinical scenarios: the description of the patient (or the problem), the intervention used to treat the problem, and the clinical outcome. The authors focus on automatic analysis of two important properties of these semantic classes.


2021 · Vol 6 (1) · pp. 54
Author(s): Dorottya Demszky

Hungarian is often referred to as a discourse-configurational language, since the structural position of constituents is determined by their logical function (topic or comment) rather than their grammatical function (e.g., subject or object). We build on work by Komlósy (1989) and argue that, in addition to discourse context, the lexical semantics of the verb also plays a significant role in determining Hungarian word order. To investigate this role, we conduct a large-scale, data-driven analysis of the ordering of 380 transitive verbs and their objects, as observed in hundreds of thousands of examples extracted from the Hungarian Gigaword Corpus. We test the effect of lexical semantics on the ordering of verbs and their objects by grouping verbs into 11 semantic classes. In addition to the semantic class of the verb, we also include two control features related to information structure, object definiteness and object NP weight, chosen to allow a comparison of their effect size to that of verb semantics. Our results suggest that all three features have a significant effect on verb-object ordering in Hungarian and that, among these features, the semantic class of the verb has the largest effect. Specifically, we find that stative verbs, such as fed 'cover', jelent 'mean' and övez 'surround', tend to be OV-preferring (with the exception of psych verbs, which are strongly VO-preferring), whereas non-stative verbs, such as bírál 'judge', csökkent 'reduce' and csókol 'kiss', tend to be VO-preferring. These findings support our hypothesis that lexical semantic factors influence word order in Hungarian.
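The core descriptive measurement, the per-class share of OV orderings, is easy to sketch. The function below aggregates toy (verb, order) observations by semantic class; it illustrates only the descriptive statistic, not the study's full statistical analysis, and the verb-to-class mapping used in the test is a hypothetical stand-in.

```python
from collections import defaultdict

def class_preferences(observations, verb_class):
    """observations: list of (verb, order) pairs with order in {"OV", "VO"}.
    verb_class: verb -> semantic class. Returns, per class, the share of
    OV orderings (values near 1.0 mean the class is OV-preferring)."""
    counts = defaultdict(lambda: [0, 0])  # class -> [ov_count, total]
    for verb, order in observations:
        cls = verb_class.get(verb)
        if cls is None:  # skip verbs outside the classified set
            continue
        counts[cls][1] += 1
        if order == "OV":
            counts[cls][0] += 1
    return {cls: ov / total for cls, (ov, total) in counts.items()}
```

Applied to corpus extractions, a statistic like this gives the raw OV/VO preferences that the control features (definiteness, NP weight) are then compared against.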


2017 · Vol 22 (4) · pp. 521-550
Author(s): Emiel van den Hoven, Evelyn C. Ferstl

Abstract: Given a sentence such as Mary fascinated/admired Sue because she did great, the verb fascinated leads people to interpret she as referring to Mary, whereas admired leads people to interpret she as referring to Sue. This phenomenon is known as implicit causality (IC). Recent studies have shown that verbs’ causality biases closely correspond to the verbs’ semantic classes, as classified in VerbNet, a lexicon that groups verbs into classes on the basis of syntactic behavior. The current study further investigates the relationship between causality biases and semantic classes. Using corpus data we show that the collostruction strength between verbs and the syntactic constructions that VerbNet classes are based on can be a good predictor of causality bias. This result suggests that the relation between semantic class and causality bias is not a categorical matter; more typical members of the semantic class show a stronger causality bias than less typical members.


2017 · Vol 73 (3-4) · pp. 7-28
Author(s): Aleksander Kiklewicz

The subject of this article is subcategorization of Russian mental verbs (verbs of knowledge, understanding and thinking) considering information encoded in their form and structure about the mental (intellectual) functions and time schemata, i.e. taking into account such characteristics as continuity and limitations. The author refers to the extensive literature on this subject, above all, the publications of Russian researchers, such as L. M. Vasiliev, G. A. Zolotova, N. J. Shvedova et al., and also presents an analysis of the collected empirical material (543 units of modern Russian language). The conclusions concern the division of verbs into semantic classes as well as the degree of representation of each class.


2020 · pp. 49-59

The article deals with semantic classes of verbs with different valences. Based on the interpretation of the semantic class of a verb, semantic representations are used in teaching the Russian language that result from the segmentation of the gestalt interpretation, so that the prepositional-case forms and the verb are aligned with a semantic invariant, the frame. From the point of view of semantics, the first actant is the subject of the action, i.e. the one who performs it; the second actant is the object on which the action directly bears, the direct object; the third actant is the indirect or further object, to whose benefit or detriment the action is performed, the indirect complement. The semantic side of a sentence presupposes its surface form: the sentence is considered as a sign that has a plane of content and a plane of expression. The two planes are closely related, but a comprehensive approach is needed to clarify the specific organization of each of them and to establish their relationship. L. Tesnière also emphasized that trivalent verbs, the most structurally complex and the most difficult to master, tend to fall out of the researcher's field of vision. The selection of semantic classes of such verbs is therefore of great theoretical and practical importance. The semantics of a verb includes both verbal and subject features; the latter form the semantic basis of the verb's valency. It can be assumed that, as part of the semantic interpretation of the distributive-transformational scheme, the correlation between frames and the minimal interpretation determines the case meaning of each actant. To determine the meaning of a case, it is sufficient and necessary to refer the word in that case form to the distributive-transformational feature and to name the frame corresponding to the interpretation segment of that noun.


Author(s): Zhiping Shi, Qingyong Li, Qing He, Zhongzhi Shi

Semantics-based retrieval is a trend in Content-Based Multimedia Retrieval (CBMR). Typically, in multimedia databases there exist two kinds of clues for a query: perceptive features and semantic classes. In this chapter, we propose a novel framework for multimedia database organization and retrieval that integrates perceptive features and semantic classes. To this end, a semantics-supervised cluster-based index organization approach (SSCI for short) was developed: the entire data set is divided hierarchically into many clusters until the objects within a cluster are not only close in the perceptive feature space but also belong to the same semantic class; then an index entry is built for each cluster. In particular, the perceptive feature vectors in a cluster are stored adjacently on disk. Furthermore, the SSCI supports a relevance feedback approach in which users mark positive and negative examples with the cluster, rather than a single object, as the unit. Our experiments show that the proposed framework can improve the retrieval speed and precision of CBMR systems significantly.
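The SSCI idea, split until clusters are semantically pure, then index one entry per cluster, can be sketched roughly as follows. The split rule here (bisecting along the dimension with the widest spread) is a simple stand-in for the chapter's actual clustering step, and the data layout and nearest-centroid lookup are illustrative assumptions.

```python
import statistics

def build_index(items):
    """items: list of (feature_vector, semantic_class, obj_id) tuples.
    Recursively split until every object in a cluster shares one semantic
    class, then emit an index entry (centroid, members)."""
    classes = {c for _, c, _ in items}
    if len(classes) == 1 or len(items) == 1:
        centroid = [statistics.mean(col) for col in zip(*(v for v, _, _ in items))]
        return [(centroid, items)]
    # Split along the dimension with the largest spread (a simple stand-in
    # for the paper's clustering step).
    dims = len(items[0][0])
    d = max(range(dims), key=lambda j: max(v[j] for v, _, _ in items) - min(v[j] for v, _, _ in items))
    items = sorted(items, key=lambda it: it[0][d])
    mid = len(items) // 2
    return build_index(items[:mid]) + build_index(items[mid:])

def query(index, qvec):
    """Nearest-centroid lookup; returns the matching cluster's members."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, qvec))
    centroid, members = min(index, key=lambda e: dist(e[0]))
    return members
```

Because each index entry covers a perceptually coherent and semantically pure cluster, a query only needs to scan the members of the nearest cluster, which is what gives the approach its speed advantage over a flat scan.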

