Near-Synonymy and Lexical Choice

2002 ◽  
Vol 28 (2) ◽  
pp. 105-144 ◽  
Author(s):  
Philip Edmonds ◽  
Graeme Hirst

We develop a new computational model for representing the fine-grained meanings of near-synonyms and the differences between them. We also develop a lexical-choice process that can decide which of several near-synonyms is most appropriate in a particular situation. This research has direct applications in machine translation and text generation. We first identify the problems of representing near-synonyms in a computational lexicon and show that no previous model adequately accounts for near-synonymy. We then propose a preliminary theory to account for near-synonymy, relying crucially on the notion of granularity of representation, in which the meaning of a word arises out of a context-dependent combination of a context-independent core meaning and a set of explicit differences from its near-synonyms. That is, near-synonyms cluster together. We then develop a clustered model of lexical knowledge, derived from the conventional ontological model. The model cuts off the ontology at a coarse grain, thus avoiding an awkward proliferation of language-dependent concepts in the ontology, yet maintaining the advantages of efficient computation and reasoning. The model groups near-synonyms into subconceptual clusters that are linked to the ontology. A cluster differentiates near-synonyms in terms of fine-grained aspects of denotation, implication, expressed attitude, and style. The model is general enough to account for other types of variation, for instance, in collocational behavior. An efficient, robust, and flexible fine-grained lexical-choice process is a consequence of a clustered model of lexical knowledge. To make it work, we formalize criteria for lexical choice as preferences to express certain concepts with varying indirectness, to express attitudes, and to establish certain styles. The lexical-choice process itself works on two tiers: between clusters and between the near-synonyms of a cluster. We describe our prototype implementation of the system, called I-Saurus.
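The two-tier process described above can be sketched in a few lines. This is an illustrative toy, not the I-Saurus implementation: the cluster data, nuance names, and strength values are all invented for the example; the real model encodes far richer denotational, attitudinal, and stylistic distinctions.

```python
# Toy sketch of two-tier lexical choice: tier 1 selects a cluster by its
# core concept; tier 2 ranks the cluster's near-synonyms by how well their
# fine-grained differences satisfy the stated preferences.
# All data below is invented for illustration.

CLUSTERS = {
    # core concept -> {near-synonym: {nuance: strength}}
    "generic-error": {
        "error":   {"formality": 0.6, "blameworthiness": 0.3},
        "mistake": {"formality": 0.4, "blameworthiness": 0.5},
        "blunder": {"formality": 0.2, "blameworthiness": 0.9},
    },
}

def choose_word(concept, preferences):
    """Tier 1: pick the cluster for the concept. Tier 2: score members
    by closeness of each nuance strength to the preferred strength."""
    cluster = CLUSTERS[concept]
    def score(word):
        nuances = cluster[word]
        return -sum(abs(nuances.get(k, 0.0) - v)
                    for k, v in preferences.items())
    return max(cluster, key=score)

print(choose_word("generic-error", {"blameworthiness": 1.0}))  # -> blunder
```

A generator would call this with preferences derived from the input semantics and the desired style, falling back to the cluster's default member when no preference discriminates.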

2006 ◽  
Vol 33 (1) ◽  
pp. 99-124 ◽  
Author(s):  
BHUVANA NARASIMHAN ◽  
MARIANNE GULLBERG

Children are able to take multiple perspectives in talking about entities and events. But the nature of children's sensitivities to the complex patterns of perspective-taking in adult language is unknown. We examine perspective-taking in four- and six-year-old Tamil-speaking children describing placement events, as reflected in the use of a general placement verb (veyyii ‘put’) versus two fine-grained caused posture expressions specifying orientation, either vertical (nikka veyyii ‘make stand’) or horizontal (paDka veyyii ‘make lie’). We also explore whether animacy systematically promotes shifts to a fine-grained perspective. The results show that four- and six-year-olds switch perspectives as flexibly and systematically as adults do. Animacy influences shifts to a fine-grained perspective similarly across age groups. However, unexpectedly, six-year-olds also display greater overall sensitivity to orientation, preferring the vertical over the horizontal caused posture expression. Despite early flexibility, the factors governing the patterns of perspective-taking on events are undergoing change even in later childhood, reminiscent of U-shaped semantic reorganizations observed in children's lexical knowledge. The present study points to the intriguing possibility that mechanisms that operate at the level of semantics could also influence subtle patterns of lexical choice and perspective-shifts.


Author(s):  
Zhuliang Yao ◽  
Shijie Cao ◽  
Wencong Xiao ◽  
Chen Zhang ◽  
Lanshun Nie

In trained deep neural networks, unstructured pruning can remove redundant weights to lower storage cost, but it requires customized hardware to speed up practical inference. Another line of work accelerates sparse model inference on general-purpose hardware by adopting coarse-grained sparsity, pruning or regularizing consecutive weights for efficient computation; however, this method often sacrifices model accuracy. In this paper, we propose a novel fine-grained sparsity approach, Balanced Sparsity, to achieve high model accuracy efficiently on commodity hardware. Our approach adapts to the high-parallelism properties of GPUs, showing strong potential for sparsity in widely deployed deep learning services. Experimental results show that Balanced Sparsity achieves up to 3.1x practical speedup for model inference on GPUs, while retaining the same high model accuracy as fine-grained sparsity.


2022 ◽  
Vol 40 (3) ◽  
pp. 1-29
Author(s):  
Peijie Sun ◽  
Le Wu ◽  
Kun Zhang ◽  
Yu Su ◽  
Meng Wang

Review-based recommendation utilizes both users’ rating records and the associated reviews for recommendation. Recently, with the growing demand for explanations of recommendation results, reviews have been used to train encoder–decoder models for explanation text generation. As most reviews are general text without detailed evaluation, some researchers have leveraged auxiliary information about users or items to enrich the generated explanation text. Nevertheless, such auxiliary data is not available in most scenarios and may raise data privacy problems. In this article, we argue that reviews contain abundant semantic information expressing users’ feelings about various aspects of items, and that this information is not fully exploited in the current explanation text generation task. To this end, we study how to generate more fine-grained explanation text in review-based recommendation without any auxiliary data. Though the idea is simple, it is non-trivial, since the aspects are hidden and unlabeled. It is also very challenging to inject aspect information when generating explanation text from noisy review input. To solve these challenges, we first leverage an advanced unsupervised neural aspect extraction model to learn the aspect-aware representation of each review sentence. Users and items can then be represented in the aspect space based on their historically associated reviews. After that, we detail how to better predict ratings and generate explanation text with the user and item representations in the aspect space. We further dynamically assign larger weights to review sentences that contain a larger proportion of aspect words to control the text generation process, and jointly optimize rating prediction accuracy and explanation text generation quality with a multi-task learning framework.
Finally, extensive experimental results on three real-world datasets demonstrate the superiority of our proposed model for both recommendation accuracy and explainability.
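The sentence-weighting step can be sketched as follows. The aspect vocabulary here is invented; in the approach described above it would come from the unsupervised neural aspect extraction model, and the weights would steer the explanation decoder rather than be printed.

```python
# Hypothetical sketch: weight each review sentence by its share of
# aspect words, normalized to sum to 1, so aspect-rich sentences
# dominate the explanation generation. ASPECT_WORDS is invented.

ASPECT_WORDS = {"battery", "screen", "price", "camera"}

def sentence_weights(sentences):
    """Return per-sentence weights proportional to the fraction of
    aspect words, with a uniform fallback if none appear."""
    props = []
    for s in sentences:
        tokens = s.lower().split()
        props.append(sum(t in ASPECT_WORDS for t in tokens)
                     / max(len(tokens), 1))
    total = sum(props)
    if total == 0:
        return [1 / len(sentences)] * len(sentences)
    return [p / total for p in props]

print(sentence_weights(["great battery and screen", "arrived on time"]))
```

In the multi-task setting, these weights would modulate the generation loss per sentence while the rating-prediction loss is optimized jointly over the same aspect-space representations.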


2000 ◽  
Vol 2000 (0) ◽  
pp. 71-72
Author(s):  
Tsuneo TAKAHASHI ◽  
Masahiro ISHIHARA ◽  
Shin-ichi BABA ◽  
Kimio HAYASHI ◽  
Takashi KONISHI

2011 ◽  
Vol 90-93 ◽  
pp. 1373-1382 ◽  
Author(s):  
Zhen Ming Shi ◽  
You Quan Wang ◽  
Jian Feng Chen ◽  
Zu Guang Shang ◽  
Xiao Tao He

The fills of barrier dams commonly result from high-speed landslide debris flows. In this paper, four model tests were conducted to study the effect of fill size on the stability of barrier dams; the failure time, failure mode, pore pressures, and earth pressures were observed and analyzed. The results show that barrier dams composed of coarse-grained or well-graded fills are more stable than those composed of fine-grained fills; coarse-grained dams are more sensitive to rising water levels than fine-grained dams; the failure mode of coarse-grained dams is usually overflow erosion, with failure beginning at the top of the dam; and the failure mode of fine-grained dams is sliding, with failure beginning at the downstream slope.


2010 ◽  
Vol 15 (1) ◽  
pp. 56-87 ◽  
Author(s):  
Dilin Liu

Using the Corpus of Contemporary American English as the source data and employing a corpus-based behavioral profile (BP) approach, this study examines the internal semantic structure of a set of five near-synonyms (chief, main, major, primary, and principal). By focusing on their distributional patterns, especially the typical types of nouns that they each modify, the study has identified several important fine-grained semantic and usage differences among the five near-synonyms and produced a meaningful delineation of their internal semantic structure. Some of the findings of the study challenge several existing understandings of these adjectives’ meanings and usage patterns. Furthermore, the results of the study have affirmed (i) the theory and applicability of the BP approach for studying the semantic and usage patterns of synonyms in a set, and (ii) previous research findings about the co-occurrents of adjectives that best capture the essence of the semantics of adjectives, especially attributive adjectives.
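The distributional core of a behavioral profile analysis is simple to sketch: tally which nouns each near-synonym adjective modifies and compare the resulting distributions. The corpus below is a tiny invented stand-in for the COCA data the study actually uses.

```python
# Toy sketch of collecting adjective -> modified-noun profiles from
# (adjective, noun) bigrams. The corpus is invented for illustration.

from collections import Counter

ADJECTIVES = {"chief", "main", "major", "primary", "principal"}

def noun_collocates(bigrams):
    """Tally, for each target adjective, the nouns it modifies."""
    profiles = {a: Counter() for a in ADJECTIVES}
    for adj, noun in bigrams:
        if adj in ADJECTIVES:
            profiles[adj][noun] += 1
    return profiles

corpus = [("chief", "executive"), ("main", "road"),
          ("chief", "concern"), ("main", "road"), ("major", "issue")]
profiles = noun_collocates(corpus)
print(profiles["main"].most_common(1))  # [('road', 2)]
```

A full BP study would then cluster or statistically compare these noun distributions to delineate where the five adjectives' usages diverge.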


2006 ◽  
Vol 32 (2) ◽  
pp. 223-262 ◽  
Author(s):  
Diana Inkpen ◽  
Graeme Hirst

Choosing the wrong word in a machine translation or natural language generation system can convey unwanted connotations, implications, or attitudes. The choice between near-synonyms such as error, mistake, slip, and blunder—words that share the same core meaning, but differ in their nuances—can be made only if knowledge about their differences is available. We present a method to automatically acquire a new type of lexical resource: a knowledge base of near-synonym differences. We develop an unsupervised decision-list algorithm that learns extraction patterns from a special dictionary of synonym differences. The patterns are then used to extract knowledge from the text of the dictionary. The initial knowledge base is later enriched with information from other machine-readable dictionaries. Information about the collocational behavior of the near-synonyms is acquired from free text. The knowledge base is used by Xenon, a natural language generation system that shows how the new lexical resource can be used to choose the best near-synonym in specific situations.
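The pattern-based extraction step can be illustrated with a single hand-written pattern standing in for the learned decision list. The pattern and example sentence are invented; the actual system learns its patterns unsupervised from a special dictionary of synonym differences.

```python
import re

# Hypothetical sketch: one extraction pattern that pulls a
# (word1, dimension, word2) difference triple out of dictionary-style
# text. A decision list would rank many such learned patterns.
PATTERN = re.compile(r"(\w+) is more (\w+) than (\w+)")

def extract_differences(text):
    """Return (word1, dimension, word2) triples matched in the text."""
    return PATTERN.findall(text)

text = "blunder is more blameworthy than mistake"
print(extract_differences(text))  # [('blunder', 'blameworthy', 'mistake')]
```

Triples extracted this way would populate the initial knowledge base of near-synonym differences, later enriched from other machine-readable dictionaries and collocation data.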

