Modeling Source Syntax and Semantics for Neural AMR Parsing

Author(s):  
DongLai Ge ◽  
Junhui Li ◽  
Muhua Zhu ◽  
Shoushan Li

Sequence-to-sequence (seq2seq) approaches formalize Abstract Meaning Representation (AMR) parsing as a translation task from a source sentence to a target AMR graph. However, previous studies generally model a source sentence as a word sequence and ignore the inherent syntactic and semantic information in the sentence. In this paper, we propose two effective approaches to explicitly incorporating source syntax and semantics into neural seq2seq AMR parsing. The first approach linearizes the source syntactic and semantic structure into a mixed sequence of words, syntactic labels, and semantic labels, while in the second approach we propose a syntactic and semantic structure-aware encoding scheme through a self-attentive model to explicitly capture syntactic and semantic relations between words. Experimental results on an English benchmark dataset show that our two approaches achieve significant improvements of 3.1% and 3.4% in F1 score over a strong seq2seq baseline.
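The first approach's linearization step can be sketched as follows. This is a minimal illustration, not the authors' code: the toy tree, label set, and bracketing scheme are assumptions chosen only to show how a syntactic structure becomes a mixed sequence of labels and words suitable for a seq2seq encoder.

```python
# Hypothetical sketch: linearize a toy syntactic tree into a mixed
# sequence of bracketed labels and words for the seq2seq source side.
# Tree format and labels are illustrative assumptions.

def linearize(tree):
    """Depth-first traversal emitting '(LABEL' ... ')' around each span."""
    label, children = tree
    if isinstance(children, str):          # leaf node: (POS, word)
        return ["(" + label, children, ")"]
    tokens = ["(" + label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens

toy_tree = ("S", [
    ("NP", [("PRP", "They")]),
    ("VP", [("VBP", "want"), ("NP", [("NN", "peace")])]),
])

print(" ".join(linearize(toy_tree)))
# (S (NP (PRP They ) ) (VP (VBP want ) (NP (NN peace ) ) ) )
```

The resulting token sequence is fed to the encoder in place of the plain word sequence, so the model sees structure without any architectural change.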

2015 ◽  
Vol 2015 ◽  
pp. 1-9 ◽  
Author(s):  
Lin Guo ◽  
Wanli Zuo ◽  
Tao Peng ◽  
Lin Yue

The diversity of large-scale semistructured data makes the extraction of implicit semantic information extremely difficult. This paper proposes an automatic, unsupervised method of text categorization in which tree-shaped structures are used to represent semantic knowledge and to uncover implicit information by mining hidden structures, without cumbersome lexical analysis. Mining implicit frequent structures in trees can discover both direct and indirect semantic relations, which largely enhances the accuracy of matching and classifying texts. The experimental results show that the proposed algorithm markedly reduces the time and effort spent in training and classification, outperforming established competitors in correctness and effectiveness.


2021 ◽  
Vol 21 (1) ◽  
pp. 103-118
Author(s):  
Qusai Y. Shambour ◽  
Nidal M. Turab ◽  
Omar Y. Adwan

Electronic commerce has grown steadily over the last decade as a new driver of the retail industry. In fact, the growth of e-Commerce has caused a significant rise in the number of products and services offered on the Internet. This is where recommender systems come into play, effectively providing meaningful recommendations to consumers based on their needs and interests. However, recommender systems remain vulnerable to sparse rating data and to cold-start users and items. To develop an effective e-Commerce recommender system that addresses these limitations, we propose a Trust-Semantic enhanced Multi-Criteria Collaborative Filtering (TSeMCCF) approach that exploits the trust relations and multi-criteria ratings of users, together with the semantic relations of items, within the CF framework to achieve effective results when sufficient rating data are not available. The experimental results show that the proposed approach outperforms other benchmark recommendation approaches in recommendation accuracy and coverage.
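The multi-criteria side of such an approach can be sketched as below. This is a hedged illustration of the general idea, not the TSeMCCF formulation: the criteria, ratings, and the choice of averaging per-criterion Pearson-style agreement are all invented for the example.

```python
import numpy as np

# Hedged sketch (ratings and criteria are invented): a multi-criteria CF
# similarity averages per-criterion mean-centred cosine (Pearson-style)
# agreement between two users, instead of comparing a single overall rating.

def multi_criteria_sim(ratings_u, ratings_v):
    """ratings_*: (items, criteria) arrays over the users' co-rated items."""
    sims = []
    for c in range(ratings_u.shape[1]):
        a = ratings_u[:, c] - ratings_u[:, c].mean()
        b = ratings_v[:, c] - ratings_v[:, c].mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        sims.append(float(a @ b) / denom if denom else 0.0)
    return float(np.mean(sims))

u = np.array([[5, 4, 3], [3, 2, 4], [4, 4, 5]], dtype=float)  # 3 items x 3 criteria
v = np.array([[4, 4, 2], [2, 1, 4], [5, 4, 4]], dtype=float)
print(round(multi_criteria_sim(u, v), 3))
```

In a fuller system this similarity would be blended with trust relations and item semantics before making predictions, as the abstract describes.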


Author(s):  
Tianming Wang ◽  
Xiaojun Wan ◽  
Hanqi Jin

Abstract meaning representation (AMR)-to-text generation is the challenging task of generating natural language texts from AMR graphs, where nodes represent concepts and edges denote relations. The current state-of-the-art methods use graph-to-sequence models; however, they still cannot significantly outperform the previous sequence-to-sequence models or statistical approaches. In this paper, we propose a novel graph-to-sequence model (Graph Transformer) to address this task. The model directly encodes the AMR graphs and learns the node representations. A pairwise interaction function is used for computing the semantic relations between the concepts. Moreover, attention mechanisms are used for aggregating the information from the incoming and outgoing neighbors, which help the model to capture the semantic information effectively. Our model outperforms the state-of-the-art neural approach by 1.5 BLEU points on LDC2015E86 and 4.8 BLEU points on LDC2017T10, achieving new state-of-the-art performance.
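The neighbor-aggregation step can be sketched as follows. This is an illustrative simplification, not the authors' Graph Transformer: the toy graph, dot-product scoring, and concatenation of incoming- and outgoing-neighbor summaries are assumptions standing in for the paper's learned attention layers.

```python
import numpy as np

# Illustrative sketch (not the paper's code): update each node by
# attending separately over its incoming and outgoing neighbours,
# then concatenating the two summaries.

def attend(query, neighbours):
    """Dot-product attention of one node state over neighbour states."""
    if neighbours.size == 0:
        return np.zeros_like(query)
    scores = neighbours @ query                # (n,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ neighbours                # weighted sum, shape (d,)

rng = np.random.default_rng(0)
d = 4
h = rng.normal(size=(3, d))                    # states for a 3-node graph
in_nbrs = {0: [], 1: [0], 2: [0, 1]}           # directed edges 0->1, 0->2, 1->2
out_nbrs = {0: [1, 2], 1: [2], 2: []}

new_h = [np.concatenate([attend(h[v], h[in_nbrs[v]]),
                         attend(h[v], h[out_nbrs[v]])])
         for v in range(3)]
print(new_h[2].shape)   # (8,)
```

Separating the two edge directions lets a node distinguish the concepts it governs from those governing it, which is the intuition the abstract points at.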


2013 ◽  
Vol 427-429 ◽  
pp. 1649-1652
Author(s):  
Bo Chen ◽  
Chen Lv ◽  
Dong Hong Ji

Parsing Chinese is a key issue in NLP, and many controversies arise from special Chinese sentence patterns. This paper puts forward a novel model, Feature Structure theory, to address the semantic labeling of special Chinese sentence patterns. We analyze the difficulties in annotating these sentences and compare Feature Structure with dependency structure: Feature Structure represents more semantic information and more semantic relations. The Feature Graph is a recursive undirected graph that allows nesting and multiple correlations.


2014 ◽  
Vol 5 (2) ◽  
pp. 72-88
Author(s):  
Jinghao Song ◽  
Sheng-Uei Guan ◽  
Binge Zheng

In this paper, an Incremental Hyper-Sphere Partitioning (IHSP) approach to classification, based on the Incremental Linear Encoding Genetic Algorithm (ILEGA), is proposed. Hyper-spheres approximating the boundaries of a given classification problem are searched with an incremental approach based on a unique combination of genetic algorithm (GA), output partitioning, and pattern reduction. ILEGA is used to cope with the difficulty of classification problems caused by complex pattern relationships and the curse of dimensionality. Classification problems are solved by a simple and flexible chromosome encoding scheme that differs from the one proposed in Incremental Hyper-plane Partitioning (IHPP) for classification. The algorithm is tested with 7 datasets. The experimental results show that IHSP performs better than classification using hyper-planes and a normal GA.
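The core decision rule of hyper-sphere partitioning can be sketched as below. This is a minimal assumed illustration: in the actual method the centres and radii would be evolved by the GA, whereas here they are fixed by hand, and the first-match-wins rule with a default class is a simplifying assumption.

```python
import numpy as np

# Minimal sketch of hyper-sphere classification (details assumed):
# a pattern takes the label of the first hyper-sphere containing it,
# with a default label when no sphere matches. A real IHSP run would
# search these centres and radii with a genetic algorithm.

def classify(x, spheres, default=-1):
    """spheres: list of (centre, radius, label) tuples."""
    for centre, radius, label in spheres:
        if np.linalg.norm(x - centre) <= radius:
            return label
    return default

spheres = [
    (np.array([0.0, 0.0]), 1.0, 0),   # class 0 inside the unit circle
    (np.array([3.0, 0.0]), 1.5, 1),   # class 1 around (3, 0)
]
print(classify(np.array([0.5, 0.2]), spheres))    # 0
print(classify(np.array([3.5, 0.5]), spheres))    # 1
print(classify(np.array([10.0, 10.0]), spheres))  # -1 (no sphere matches)
```

Compared with hyper-planes, spheres bound a class region on all sides, which is why the incremental search can approximate closed boundaries.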


2013 ◽  
Vol 4 (2) ◽  
pp. 67-79 ◽  
Author(s):  
Tao Yang ◽  
Sheng-Uei Guan ◽  
Jinghao Song ◽  
Binge Zheng ◽  
Mengying Cao ◽  
...  

The authors propose an incremental hyperplane partitioning approach to classification. Hyperplanes close to the classification boundaries of a given problem are searched using an incremental approach based upon a Genetic Algorithm (GA). A new method, Incremental Linear Encoding based Genetic Algorithm (ILEGA), is proposed to tackle the difficulty of classification problems caused by complex pattern relationships and the curse of dimensionality. The authors solve classification problems through a simple and flexible chromosome encoding scheme in which the partitioning rules are encoded by linear equations rather than If-Then rules. Moreover, an incremental approach combined with output partitioning and pattern reduction is applied to cope with the curse of dimensionality. The algorithm is tested with six datasets. The experimental results show that ILEGA outperforms the original GA in both lower- and higher-dimensional problems.
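The linear-equation chromosome encoding can be sketched as follows. This is a hedged illustration of the idea only: the flat gene layout and the sign-based labeling rule are assumptions, and a real ILEGA run would evolve many such chromosomes under GA selection.

```python
import numpy as np

# Hedged sketch (layout assumed): a chromosome stores the coefficients
# (w, b) of a hyperplane w.x + b = 0, and a pattern is labelled by the
# side of the plane it falls on. This replaces If-Then rule encodings.

def decode(chromosome):
    """Split a flat gene vector into hyperplane weights and a bias."""
    return np.asarray(chromosome[:-1]), chromosome[-1]

def predict(chromosome, x):
    w, b = decode(chromosome)
    return 1 if np.dot(w, x) + b >= 0 else 0

chromosome = [1.0, -1.0, 0.0]            # plane x1 - x2 = 0
print(predict(chromosome, [2.0, 1.0]))   # 1 (above the line)
print(predict(chromosome, [1.0, 2.0]))   # 0 (below the line)
```

Because the genes are plain real coefficients, standard crossover and mutation operators apply directly, which is the flexibility the abstract claims for the encoding.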


2020 ◽  
Vol 15 (1) ◽  
pp. 21-41
Author(s):  
Elizaveta Tarasova ◽  
Natalia Beliaeva

The present study analyses native speaker perceptions of the differences in the semantic structure of compounds and blends to determine whether the formal differences between compounds and blends are reflected at the semantic level. Viewpoints on blending vary, with some researchers considering it an instance of compounding (Kubozono, 1990), while others identify blending as an interim word-formation mechanism between compounding and shortening (López Rúa, 2004). The semantic characteristics of English determinative blends and N+N subordinative compounds are compared by evaluating the differences in native speakers’ perceptions of the semantic relationships between constituents of the analysed structures. The results of two web-based experiments demonstrate that readers’ interpretations of both compounds and blends differ in terms of lexical indicators of semantic relations between the elements of these units. The experimental findings indicate that language users’ interpretation of both compounds and blends includes information on semantic relationships. The differences in the effect of the semantic relations on interpretation are likely connected to the degree of formal transparency of these units.


2020 ◽  
Vol 34 (10) ◽  
pp. 13751-13752
Author(s):  
Long Bai ◽  
Xiaolong Jin ◽  
Chuanzhi Zhuang ◽  
Xueqi Cheng

Distantly Supervised Relation Extraction (DSRE) has been widely studied, since it can automatically extract relations from very large corpora. However, existing DSRE methods use only limited semantic information about entities, such as entity types. In this paper, we therefore propose a method for integrating entity type information into a neural-network-based DSRE model. It also adopts two attention mechanisms, namely sentence attention and type attention. The former selects the representative sentences for a sentence bag, while the latter selects appropriate type information for entities. Experimental comparison with existing methods on a benchmark dataset demonstrates its merits.
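The sentence-attention step can be sketched as below. This is an illustrative simplification, not the paper's model: the sentence vectors, the relation query vector, and plain dot-product scoring are assumptions standing in for learned encoders and parameters.

```python
import numpy as np

# Illustrative sketch (shapes assumed): sentence-level attention over a
# bag scores each sentence vector against a relation query vector and
# returns a weighted bag representation, down-weighting noisy sentences.

def bag_attention(sentences, relation_query):
    scores = sentences @ relation_query          # (n,) relevance scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over the bag
    return weights @ sentences                   # (d,) bag representation

bag = np.array([[1.0, 0.0],    # sentence strongly matching the relation
                [0.0, 1.0],    # off-relation (noisy) sentence
                [0.9, 0.1]])   # another good match
query = np.array([1.0, 0.0])
rep = bag_attention(bag, query)
print(rep.shape)   # (2,)
```

Type attention in the abstract plays an analogous role over candidate entity-type vectors rather than sentences, selecting the type information most useful for the relation.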


2005 ◽  
Vol 04 (02) ◽  
pp. 133-138
Author(s):  
D. Manjula ◽  
T. V. Geetha

The traditional Boolean word-based approach to information retrieval (IR) considers only words for indexing. Irrelevant information is retrieved because semantic information such as word senses and word context is not included. In this work, the importance of representing documents along another semantic dimension, in addition to sense context information, is considered. Incorporating semantic relations as an additional dimension gives better insight into the interpretation of a document. The micro-contexts generated from the documents are also used in indexing. Retrieval performance is measured in terms of precision and recall, and the tabulated results show improved performance.


Author(s):  
Yihe Liu ◽  
Huaxiang Zhang ◽  
Li Liu ◽  
Lili Meng ◽  
...  

Existing cross-media retrieval methods usually learn one shared latent subspace for different retrieval tasks, which can achieve only suboptimal retrieval. In this paper, we propose a novel cross-media retrieval method based on Query Modality and Semi-supervised Regularization (QMSR). Taking cross-media retrieval between images and texts as an example, QMSR learns two couples of mappings for the different retrieval tasks (i.e. using images to search texts (Im2Te) and using texts to search images (Te2Im)) instead of a single couple of mappings. QMSR learns the two couples of projections by optimizing the correlation between images and texts and the semantic information of the query modality (image or text). It also integrates semi-supervised regularization with the structural information among both labeled and unlabeled data of the query modality, transforming different media objects from their original feature spaces into two different isomorphic subspaces (the Im2Te common subspace and the Te2Im common subspace). Experimental results show the effectiveness of the proposed method.
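The two-subspace retrieval scheme can be sketched as below. This is a schematic illustration only: the projection matrices here are random toy stand-ins for the learned mappings, and cosine ranking is an assumed retrieval rule; the point is merely that each direction (Im2Te, Te2Im) uses its own projection pair.

```python
import numpy as np

# Schematic sketch (all matrices are toy stand-ins for learned mappings):
# unlike a single shared subspace, a QMSR-style method keeps one
# projection pair per retrieval direction.

rng = np.random.default_rng(1)
d_img, d_txt, k = 6, 5, 3
P_img_im2te = rng.normal(size=(d_img, k))   # Im2Te pair
P_txt_im2te = rng.normal(size=(d_txt, k))
P_img_te2im = rng.normal(size=(d_img, k))   # Te2Im pair
P_txt_te2im = rng.normal(size=(d_txt, k))

def retrieve(query, candidates, P_query, P_cand):
    """Rank candidates by cosine similarity in the task-specific subspace."""
    q = query @ P_query
    C = candidates @ P_cand
    sims = (C @ q) / (np.linalg.norm(C, axis=1) * np.linalg.norm(q) + 1e-9)
    return np.argsort(-sims)                # best candidate first

img_query = rng.normal(size=d_img)
texts = rng.normal(size=(4, d_txt))
ranking = retrieve(img_query, texts, P_img_im2te, P_txt_im2te)
print(ranking)
```

A text query would call `retrieve` with the Te2Im pair instead, so each direction is ranked in the subspace optimized for its own query modality.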

