Form-Independent Meaning Representation for Eventualities

Author(s):  
Mark Steedman

Linguists and philosophers since Aristotle have attempted to reduce natural language semantics in general, and the semantics of eventualities in particular, to a ‘language of mind’, expressed in terms of various collections of underlying language-independent primitive concepts. While such systems have proved insightful enough to suggest that such a universal conceptual representation is in some sense psychologically real, the primitive relations proposed, based on oppositions like agent-patient, event-state, etc., have remained incompletely convincing. This chapter proposes that the primitive concepts of the language of mind are ‘hidden’, or latent, and must be discovered automatically: by using automatic syntactic parsers and machine learning to detect consistent patterns of entailment in the vast amounts of text made available by the internet, and thereby to mine a form- and language-independent representation for natural language semantics. The representations involved combine a distributional representation of ambiguity with a language of logical form.
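The entailment-mining idea can be illustrated with a minimal sketch, not the chapter's actual method: given relation triples extracted from parsed text (all data below is invented for illustration), a relation r1 can be taken to entail r2 when the argument pairs observed with r1 are (nearly) a subset of those observed with r2.

```python
from collections import defaultdict

# Toy (subject, relation, object) triples, as might be extracted
# from parsed text. Invented for illustration only.
triples = [
    ("aspirin", "cures", "headache"),
    ("aspirin", "treats", "headache"),
    ("ibuprofen", "cures", "pain"),
    ("ibuprofen", "treats", "pain"),
    ("exercise", "treats", "depression"),
]

# Map each relation to the set of argument pairs it occurs with.
args = defaultdict(set)
for s, r, o in triples:
    args[r].add((s, o))

def entails(r1, r2, threshold=1.0):
    """r1 entails r2 if (almost) every argument pair seen with r1
    is also seen with r2 (distributional inclusion)."""
    overlap = len(args[r1] & args[r2]) / len(args[r1])
    return overlap >= threshold

print(entails("cures", "treats"))  # True: every 'cures' pair also occurs with 'treats'
print(entails("treats", "cures"))  # False: 'treats' covers pairs 'cures' does not
```

On this toy corpus the asymmetry of the pattern ("cures" pairs are a subset of "treats" pairs, but not vice versa) yields a directed entailment, which is the kind of consistent pattern the chapter proposes to detect at web scale.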

Author(s):  
Yashaswini S

To understand language, we need an understanding of the world around us. Language describes the world and provides symbols with which we represent meaning. Still, much knowledge about the world is so obvious that it is rarely explicitly stated. It is uncommon for people to state that chairs are usually on the floor and upright, or that you usually eat a cake from a plate on a table. Knowledge of such common facts provides the context within which people communicate with language. Therefore, to create practical systems that can interact with the world and communicate with people, we need to leverage such knowledge to interpret language in context. Scene generation makes it possible to produce 3D scenes from a text description. A model capable of learning natural language semantics, or of learning the patterns underlying scene composition, is of particular interest [1]. Scene generation from text involves several fields: NLP, artificial intelligence, computer vision, and machine learning. This paper focuses on optimally arranging objects in a room, with attention to the orientation of the objects with respect to the floor, walls, and ceiling of the room, along with textures. Our model suggests a novel framework that can be used as a tool with which anyone can generate scenes without 3D-modeling expertise.


Author(s):  
Pauline Jacobson

This chapter examines the currently fashionable notion of ‘experimental semantics’ and argues that most work in natural language semantics has always been experimental. The oft-cited dichotomy between ‘theoretical’ (or ‘armchair’) and ‘experimental’ work is bogus and should be dropped from the discourse. The same holds for dichotomies like ‘intuition-based’ (or ‘thought experiments’) vs. ‘empirical’ work (‘real experiments’). The so-called new ‘empirical’ methods often amount to nothing more than collecting ‘intuitions’ at large scale, or running multiple thought experiments. Of course the use of multiple subjects could well allow for a better experiment than the more traditional single- or few-subject methodologies. But whether or not this is the case depends entirely on the question at hand. In fact, the chapter considers several multiple-subject studies, shows that the particular methodology in those cases does not necessarily provide important insights, and argues that some of its claimed benefits are incorrect.


2005 ◽  Vol 28 (1) ◽  pp. 73-116
Author(s):  
Michael McCord ◽  
Arendse Bernth
