The Structure of Temporality and Modality

2006 ◽  
Vol 6 ◽  
pp. 161-201 ◽  
Author(s):  
Jonny Butler

This paper offers a view of clause structure based on semantic interpretability, focusing on the structure and interpretation of temporal (tense, aspect) and modal elements. It proposes that modality has a unitary lexical semantics along the lines of Kratzer (1977 et seq.), with different interpretations of modals deriving from the interaction of that semantics with the interpretation of the temporal elements in the structural context in which the modals are found. Different positions for modal interpretation are proposed, corresponding to the edges of phases (Chomsky 2001). Evidence for this view is put forward from various languages. The clause structure so derived is akin to the universal clausal hierarchy proposed by Cinque (1999), lending support to the notion that something like this hierarchy does indeed hold in natural language, though the justification for it is very different.

Author(s):  
Branislava Dilparić ◽  
Nina Perović

The aim of this study was to determine whether some of the approaches of lexical semantics to studying word meaning could be identified in word2vec and recurrent neural networks (RNNs), two algorithms for natural language processing (NLP). The linguistic concepts drawn from the field of lexical semantics were the decompositional, holistic, and relational approaches. Since it is assumed that NLP algorithms cannot be written on the basis of mathematical knowledge alone, but require linguistic knowledge as well, this analysis was carried out to determine exactly which models are used in the above-mentioned algorithms. First, the aforementioned linguistic models were concisely explained through descriptive research; in describing them, the authors of the paper referred to the studies of the most prominent linguists in the field of lexical semantics, such as Fillmore, Firth, and Lyons. Then, in a similar fashion, the architecture of the NLP algorithms was introduced and described, relying mostly on the studies of Mikolov et al., who designed word2vec. Next, a comparative analysis was conducted between the approaches of lexical semantics on the one hand and the algorithms in question on the other. This analysis confirmed the underlying assumption of the paper: the characteristics of the decompositional and relational approaches to studying word meaning could be recognized in word2vec, and the properties of the holistic model could be observed in RNN algorithms. More than that, the analysis showed a considerable overlap in how natural language is processed, as if the models of lexical semantics had been taken and mathematically implemented in the algorithms examined. It should be emphasized that for the purposes of this paper only the basic principles of lexical semantics and NLP algorithms were taken into account; the aim was not to describe edge cases or to detail the mechanisms of these structures and their advantages and disadvantages.
Essentially, this study sought to examine whether the basic ideas and characteristics of lexical semantics could be found in the architecture of the above-noted algorithms.
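The relational approach that the paper identifies in word2vec can be illustrated with a toy distributional sketch (purely illustrative: the corpus, window size, and use of raw co-occurrence counts in place of learned embeddings are this editor's assumptions, not the paper's method). Words that share contexts receive similar vectors, and relatedness is read off as cosine similarity:

```python
from collections import defaultdict
import math

# Toy corpus. In word2vec the vectors are learned by a neural network;
# here raw co-occurrence counts stand in for the distributional idea.
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "the cat chased the dog".split(),
]

def cooccurrence_vectors(sents, window=2):
    """Count how often each word appears within `window` positions of another."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sents:
        for i, w in enumerate(sent):
            lo, hi = max(0, i - window), min(len(sent), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vecs[w][sent[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in similar contexts, so their vectors lie closer
# to each other than either does to "mat".
print(cosine(vecs["cat"], vecs["dog"]))
print(cosine(vecs["cat"], vecs["mat"]))
```

The same similarity-from-shared-contexts logic is what the relational (Firthian) approach posits linguistically and what word2vec optimizes numerically.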


This volume explores the extremely rich diversity found under the “modal umbrella” in natural language. Offering a cross-linguistic perspective on the encoding of modal meanings that draws on novel data from an extensive set of languages, the book supports a view according to which modality infuses a much wider range of syntactic categories and levels of syntactic structure than has traditionally been thought. The volume distinguishes between “low modality,” which concerns modal interpretations associated with the verbal and nominal cartographies in syntax; “middle modality,” or modal interpretation associated with the syntactic cartography internal to the clause; and “high modality,” which relates to the cartography known as the left periphery. By offering enticing combinations of cross-linguistic discussions of the more studied sources of modality together with novel or unexpected sources of modality, the volume presents specific case studies that show how meanings associated with low, middle, and high modality crystallize across a large variety of languages. The chapters on low modality explore modal meanings in structures that lack the complexity of full clauses, including conditional readings in noun phrases and modal features in lexical verbs. The chapters on middle modality examine the effects of tense and aspect on constructions with counterfactual readings, and on those that contain canonical modal verbs. The chapters on high modality are dedicated to constructions with imperative, evidential, and epistemic readings, examining, and at times challenging, traditional perspectives that syntactically associate these interpretations with the left periphery of the clause.


2021 ◽  
Vol 2 (6) ◽  
Author(s):  
Fausto Giunchiglia ◽  
Luca Erculiani ◽  
Andrea Passerini

Lexical Semantics is concerned with how words encode mental representations of the world, i.e., concepts. We call this type of concepts classification concepts. In this paper, we focus on Visual Semantics, namely, on how humans build concepts representing what they perceive visually. We call this second type of concepts substance concepts. As shown in the paper, these two types of concepts are different and, furthermore, the mapping between them is many-to-many. In this paper we provide a theory and an algorithm for how to build substance concepts which are in a one-to-one correspondence with classification concepts, thus paving the way to a seamless integration between natural language descriptions and visual perception. This work builds upon three main intuitions: (i) substance concepts are modeled as visual objects, namely, sequences of similar frames, as perceived in multiple encounters; (ii) substance concepts are organized into a visual subsumption hierarchy based on the notions of and ; (iii) human feedback is exploited not to name objects but, rather, to align the hierarchy of substance concepts with that of classification concepts. The learning algorithm is implemented for the base case of a hierarchy of depth two. The experiments, though preliminary, show that the algorithm manages to acquire the notions of and with reasonable accuracy, despite seeing a small number of examples and receiving supervision on only a fraction of them.
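The first intuition above, a substance concept as a sequence of similar frames, can be sketched as segmentation of a frame stream into encounters (illustrative only: the feature vectors, the cosine test, and the threshold are this editor's assumptions; the paper's actual learning algorithm and hierarchy alignment are not reproduced here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def segment_into_objects(frames, threshold=0.9):
    """Group consecutive frames into 'visual objects': a new object
    starts whenever a frame is too dissimilar from the previous one."""
    objects = []
    for frame in frames:
        if objects and cosine(objects[-1][-1], frame) >= threshold:
            objects[-1].append(frame)   # same encounter continues
        else:
            objects.append([frame])     # a new encounter begins
    return objects

# Two encounters: three near-identical frames, then a very different one.
frames = [[1.0, 0.0], [0.99, 0.05], [0.98, 0.08], [0.0, 1.0]]
objs = segment_into_objects(frames)
print(len(objs))  # two visual objects
```

Each resulting group is a candidate visual object; the paper's contribution lies in organizing such objects into a subsumption hierarchy and aligning it, via human feedback, with classification concepts.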


2020 ◽  
Author(s):  
Mario Crespo Miguel

Computational linguistics is the scientific study of language from a computational perspective. Its aim is to provide computational models of natural language processing (NLP) and to incorporate them into practical applications such as speech synthesis, speech recognition, automatic translation, and many others where automatic processing of language is required. The use of good linguistic resources is crucial for the development of computational linguistics systems. Real-world applications need resources that systematize the way linguistic information is structured in a certain language. There is a continuous effort to increase the number of linguistic resources available to the linguistics and NLP community. Most of the existing linguistic resources have been created for English, mainly because most modern approaches to computational lexical semantics emerged in the United States. This situation is changing over time, and some of these projects have subsequently been extended to other languages; however, in all cases, much time and effort must be invested in creating such resources. Because of this, one of the main purposes of this work is to investigate the possibility of extending these resources to other languages such as Spanish. In this work, we introduce some of the most important resources devoted to lexical semantics, such as WordNet or FrameNet, and those focusing on Spanish, such as 3LB-LEX or Adesse. Of these, this project focuses on FrameNet, which aims to document the range of semantic and syntactic combinatory possibilities of words in English. Words are grouped according to the different frames, or situations, evoked by their meaning. If we focus on a particular topic domain like medicine and try to describe it in terms of FrameNet, we would probably obtain frames representing it such as CURE, formed by words like cure.v, heal.v, or palliative.a, or MEDICAL CONDITIONS, with lexical units such as arthritis.n, asphyxia.n, or asthma.n.
The purpose of this work is to develop an automatic means of selecting frames from a particular domain and to translate them into Spanish. As stated, we will focus on medicine. The selection of the medical frames will be corpus-based; that is, we will extract all the frames that are statistically significant in a representative corpus. We will discuss why a corpus-based approach is a reliable and unbiased way of dealing with this task. We will present an automatic method for the selection of FrameNet frames and, in order to make sure that the results obtained are coherent, we will contrast them with a previous manual selection, or benchmark. Outcomes will be analysed using the F-score, a measure widely used in applications of this type. We obtained a 0.87 F-score against our benchmark, which demonstrates the applicability of this type of automatic approach. The second part of the book is devoted to the translation of this selection into Spanish. The translation will be made using EuroWordNet, an extension of the Princeton WordNet for some European languages. We will explore different ways to link the units of our medical FrameNet selection to a certain WordNet synset, a set of words that have similar meanings. Matching the frame units to a specific synset in EuroWordNet allows us both to translate them into Spanish and to add new terms provided by WordNet into FrameNet. The results show that translation can be done quite accurately (95.6%). We hope this work can add new insight into the field of natural language processing.
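The evaluation step described above amounts to a standard precision/recall/F-score computation over two frame selections. A minimal sketch (the frame names below are hypothetical examples, not the book's actual benchmark, and its reported 0.87 is not reproduced here):

```python
def f_score(selected, benchmark):
    """Harmonic mean of precision and recall over two sets of frames."""
    selected, benchmark = set(selected), set(benchmark)
    tp = len(selected & benchmark)        # frames both selections agree on
    if tp == 0:
        return 0.0
    precision = tp / len(selected)        # how much of the automatic pick is right
    recall = tp / len(benchmark)          # how much of the benchmark was found
    return 2 * precision * recall / (precision + recall)

# Hypothetical medical frames: automatic selection vs. manual benchmark.
automatic = {"CURE", "MEDICAL_CONDITIONS", "RECOVERY", "INGESTION"}
manual = {"CURE", "MEDICAL_CONDITIONS", "RECOVERY", "HEALTH_RESPONSE"}
print(round(f_score(automatic, manual), 2))  # 0.75
```

With three frames in common out of four on each side, precision and recall are both 0.75, and so is their harmonic mean.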


Author(s):  
Edwin Battistella ◽  
Anne Lobeck

Recent analyses of word order and clause structure suggest that natural language syntax employs competing processes of Verb Fronting (VF) and Inflection Movement (IM) for the realization of Tense and Agreement (TNS and AGR) features on verb stems. As the names suggest, Verb Fronting is a process that raises a verb from the head of VP to INFL, and Inflection Movement is a rule that lowers the contents of INFL to the head of VP. Both rules are assumed to involve Chomsky-adjunction of the moved node to the target node, and both are assumed to leave a trace.
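The two movements can be pictured as tree manipulations. The sketch below is a hypothetical illustration using plain tuples, not the authors' formalism: each rule Chomsky-adjoins the moved node to its target and leaves a trace `"t"` behind:

```python
# Nodes are (label, children) tuples; leaves are strings.

def verb_fronting(infl, vp):
    """Raise V from the head of VP and Chomsky-adjoin it to INFL,
    leaving a trace in the original head position."""
    verb, *complements = vp[1]
    new_infl = ("INFL", [verb, infl])        # adjunction: [INFL V INFL]
    new_vp = ("VP", ["t"] + complements)     # trace where V used to be
    return new_infl, new_vp

def infl_movement(infl, vp):
    """Lower the contents of INFL and Chomsky-adjoin them to the V head,
    leaving a trace under INFL."""
    verb, *complements = vp[1]
    new_v = ("V", [infl, verb])              # adjunction: [V INFL V]
    return ("INFL", ["t"]), ("VP", [new_v] + complements)

infl = ("INFL", ["TNS+AGR"])
vp = ("VP", [("V", ["eat"]), ("NP", ["apples"])])
raised = verb_fronting(infl, vp)
lowered = infl_movement(infl, vp)
print(raised)
print(lowered)
```

In both outputs the moved material sits adjoined to its landing site while a trace marks the launching site, which is the shared structural signature of the two rules.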


1987 ◽  
Vol 32 (1) ◽  
pp. 33-34
Author(s):  
Greg N. Carlson