A Boundary Determined Neural Model for Relation Extraction

Author(s):  
Rui Xue Tang

Existing models extract entity relations only after two entity spans have been precisely identified, which limits the performance of relation extraction. Compared with full entity spans, boundaries have smaller granularity and less ambiguity, so they can be detected precisely and incorporated to learn better representations. Motivated by these strengths, we propose a boundary determined neural (BDN) model, which leverages boundaries as task-related cues to predict relation labels. The model generates high-quality relation instances from pairs of boundaries, which alleviates the error propagation problem. Moreover, it fuses boundary-relevant information into the distributed representation to better capture semantic and dependency information, which increases the discriminability of the neural network. Experiments show that the model achieves state-of-the-art performance on the ACE05 corpus.
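The paper's exact architecture is not given here; as a minimal sketch of the core idea from the abstract, detected boundaries are paired to form candidate relation instances instead of relying on fully extracted spans. The helper names (detect_boundaries, candidate_relation_instances) are hypothetical, and the boundary detector is a stub standing in for the neural tagger.

```python
from itertools import combinations

def detect_boundaries(tokens):
    """Hypothetical boundary detector: returns token indices that start an entity.
    In the BDN model this would be a neural component; here it is a stub."""
    return [i for i, tok in enumerate(tokens) if tok[0].isupper()]

def candidate_relation_instances(tokens):
    """Pair detected boundaries to form candidate relation instances.
    Each candidate is (boundary_a, boundary_b, the tokens between them)."""
    boundaries = detect_boundaries(tokens)
    for a, b in combinations(boundaries, 2):
        yield a, b, tokens[a:b + 1]

tokens = "John works for Acme Corp in Boston".split()
for a, b, ctx in candidate_relation_instances(tokens):
    print(a, b, ctx)
```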

2020 ◽  
Vol 34 (10) ◽  
pp. 13751-13752
Author(s):  
Long Bai ◽  
Xiaolong Jin ◽  
Chuanzhi Zhuang ◽  
Xueqi Cheng

Distantly Supervised Relation Extraction (DSRE) has been widely studied, since it can automatically extract relations from very large corpora. However, existing DSRE methods make use of only limited semantic information about entities, such as their types. In this paper, we therefore propose a method for integrating entity type information into a neural-network-based DSRE model. It adopts two attention mechanisms, namely sentence attention and type attention. The former selects representative sentences for a sentence bag, while the latter selects appropriate type information for entities. Experimental comparison with existing methods on a benchmark dataset demonstrates its merits.
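As a hedged illustration of the two attention mechanisms named in the abstract (not the authors' implementation), the sketch below uses one attention-pooling routine both over a bag of sentence encodings and over candidate entity-type embeddings; the dimensions and query vectors are assumptions.

```python
import torch
import torch.nn.functional as F

def attention_pool(items, query):
    """Weight a set of vectors (items: [n, d]) by similarity to a query ([d])
    and return the weighted sum. The same pattern serves for sentence
    attention (over a sentence bag) and type attention (over entity types)."""
    scores = items @ query              # [n]
    weights = F.softmax(scores, dim=0)  # [n]
    return weights @ items              # [d]

d = 8
bag = torch.randn(5, d)        # 5 sentence encodings in one bag
relation_query = torch.randn(d)
type_embs = torch.randn(3, d)  # candidate type embeddings for an entity
type_query = torch.randn(d)

bag_repr = attention_pool(bag, relation_query)
type_repr = attention_pool(type_embs, type_query)
fused = torch.cat([bag_repr, type_repr])  # fed to a relation classifier
print(fused.shape)
```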


2022 ◽  
Vol 12 (2) ◽  
pp. 715
Author(s):  
Luodi Xie ◽  
Huimin Huang ◽  
Qing Du

Knowledge graph (KG) embedding has been widely studied to obtain low-dimensional representations for entities and relations. It serves as the basis for downstream tasks, such as KG completion and relation extraction. Traditional KG embedding techniques usually represent entities/relations as vectors or tensors, mapping them into different semantic spaces and ignoring their uncertainty. The affinities between entities and relations are ambiguous when they are not embedded in the same latent space. In this paper, we introduce a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space. To address the neglect of uncertainty for KG components, we propose a variational auto-encoder that represents KG components as Gaussian distributions. In addition, compared with previous methods, our method has the advantages of high quality and interpretability. Our experimental results on several benchmark datasets demonstrate our model’s superiority over the state-of-the-art baselines.
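A minimal sketch, not the paper's model, of representing a KG component as a Gaussian (a mean and a log-variance vector) and sampling from it with the usual VAE reparameterization trick; the embedding sizes, prior, and downstream scoring are assumptions.

```python
import torch
import torch.nn as nn

class GaussianEmbedding(nn.Module):
    """Each entity/relation gets a mean and a log-variance vector, so the
    embedding is a Gaussian distribution rather than a point estimate."""
    def __init__(self, num_items, dim):
        super().__init__()
        self.mu = nn.Embedding(num_items, dim)
        self.logvar = nn.Embedding(num_items, dim)

    def forward(self, idx):
        mu, logvar = self.mu(idx), self.logvar(idx)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)   # reparameterization trick
        # KL divergence to a standard normal prior, as in a VAE
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)
        return z, kl

emb = GaussianEmbedding(num_items=1000, dim=32)
z, kl = emb(torch.tensor([3, 7]))
print(z.shape, kl.shape)
```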


2018 ◽  
Vol 232 ◽  
pp. 01061
Author(s):  
Danhua Li ◽  
Xiaofeng Di ◽  
Xuan Qu ◽  
Yunfei Zhao ◽  
Honggang Kong

Pedestrian detection aims to localize and recognize every pedestrian instance in an image with a bounding box. The current state-of-the-art method is Faster RCNN, a network in which a region proposal network (RPN) generates high-quality region proposals and a Fast RCNN head extracts features and classifies the proposals into the corresponding categories. The contribution of this paper is to integrate low-level and high-level features into a Faster RCNN-based pedestrian detection framework, which efficiently increases the capacity of the features. We comprehensively evaluate our framework on the Caltech pedestrian detection benchmark, where our method achieves state-of-the-art accuracy and presents a competitive result.
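The abstract does not specify how the low- and high-level features are combined; the sketch below shows one common fusion pattern (upsampling a deep, coarse feature map and concatenating it with a shallow, fine one) purely as an illustration, with channel counts and spatial sizes chosen arbitrarily.

```python
import torch
import torch.nn.functional as F

def fuse_features(low_level, high_level):
    """Upsample the coarse, high-level feature map to the spatial size of the
    fine, low-level map and concatenate them along the channel axis."""
    high_up = F.interpolate(high_level, size=low_level.shape[-2:],
                            mode="bilinear", align_corners=False)
    return torch.cat([low_level, high_up], dim=1)

low = torch.randn(1, 64, 80, 60)    # shallow layer: fine resolution
high = torch.randn(1, 256, 20, 15)  # deep layer: coarse resolution
fused = fuse_features(low, high)
print(fused.shape)                  # torch.Size([1, 320, 80, 60])
```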


Author(s):  
S. Soltic ◽  
N. Kasabov

The human brain has an amazing ability to recognize hundreds of thousands of different tastes. The question is: can we build artificial systems that achieve this level of complexity? Such systems would be useful in biosecurity, the chemical and food industries, security, home automation, etc. The purpose of this chapter is to explore how spiking neurons could be employed to build biologically plausible and efficient taste recognition systems. It presents an approach based on a novel spiking neural network model, the evolving spiking neural network with population coding (ESNN-PC), which is characterized by: (i) adaptive learning, (ii) knowledge discovery and (iii) accurate classification. ESNN-PC is used on a benchmark taste problem where the effectiveness of the information encoding, the quality of the extracted rules and the model’s adaptive properties are explored. Finally, applications of ESNN-PC in recognition tasks of increasing interest in robotics and pervasive computing are suggested.
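Population coding in spiking neural networks of this kind typically maps each real-valued input onto a set of overlapping Gaussian receptive fields whose activations are converted to spike times. The sketch below illustrates only that encoding step, with the number of fields, value range, and time scale chosen as assumptions rather than taken from the chapter.

```python
import numpy as np

def population_encode(x, num_fields=6, x_min=0.0, x_max=1.0, max_time=1.0):
    """Encode scalar x into spike times: each Gaussian receptive field fires
    earlier the more strongly it responds to x (strong response -> early spike)."""
    centers = np.linspace(x_min, x_max, num_fields)
    width = (x_max - x_min) / (num_fields - 1) / 1.5
    responses = np.exp(-0.5 * ((x - centers) / width) ** 2)  # in (0, 1]
    spike_times = max_time * (1.0 - responses)               # 0 = fires immediately
    return spike_times

print(np.round(population_encode(0.37), 3))
```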


2016 ◽  
Vol 2016 ◽  
pp. 1-9 ◽  
Author(s):  
Lei Hua ◽  
Chanqin Quan

The state-of-the-art methods for protein-protein interaction (PPI) extraction are primarily based on kernel methods, and their performance strongly depends on handcrafted features. In this paper, we tackle PPI extraction using convolutional neural networks (CNN) and propose a shortest dependency path based CNN (sdpCNN) model. The proposed method (1) takes only the SDP and word embeddings as input and (2) avoids bias from feature selection by using a CNN. We performed experiments on the standard AIMed and BioInfer datasets, and the results demonstrate that our approach outperforms state-of-the-art kernel-based methods. In particular, by tracking the sdpCNN model, we find that sdpCNN extracts key features automatically, and we verify that pretrained word embeddings are crucial in the PPI task.
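The following is a minimal sketch of the general sdpCNN idea rather than the authors' exact model: embed the tokens along a shortest dependency path, apply a 1-D convolution over the path, max-pool, and classify. Vocabulary size, embedding dimension, and kernel size are assumptions.

```python
import torch
import torch.nn as nn

class SDPCNNSketch(nn.Module):
    """Embed the tokens along a shortest dependency path, run a 1-D convolution
    over them, max-pool over the path, and classify the interaction."""
    def __init__(self, vocab_size=5000, emb_dim=50, num_filters=100, num_classes=2):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, num_filters, kernel_size=3, padding=1)
        self.fc = nn.Linear(num_filters, num_classes)

    def forward(self, sdp_token_ids):          # [batch, path_len]
        x = self.emb(sdp_token_ids)            # [batch, path_len, emb_dim]
        x = self.conv(x.transpose(1, 2))       # [batch, filters, path_len]
        x = torch.relu(x).max(dim=2).values    # max-pool over the path
        return self.fc(x)                      # class logits

model = SDPCNNSketch()
logits = model(torch.randint(0, 5000, (4, 7)))  # 4 paths of length 7
print(logits.shape)
```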


Author(s):  
K. Rahmani ◽  
H. Mayer

In this paper we present a pipeline for high-quality semantic segmentation of building facades using a Structured Random Forest (SRF), a Region Proposal Network (RPN) based on a Convolutional Neural Network (CNN), and rectangular fitting optimization. Our main contribution is that we employ features created by the RPN as channels in the SRF. We empirically show that this is very effective, especially for doors and windows. Our pipeline is evaluated on two datasets where we outperform current state-of-the-art methods. Additionally, we quantify the contribution of the RPN and the rectangular fitting optimization to the accuracy of the result.
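The key move described is to append RPN-derived feature maps as extra channels for the structured random forest. The sketch below shows only that channel-stacking step, with a plain per-pixel random forest from scikit-learn standing in for the SRF and with toy data; everything beyond the stacking is an assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy data: an RGB facade image and two RPN-derived feature maps (same H x W).
H, W = 16, 16
rgb = np.random.rand(H, W, 3)
rpn_features = np.random.rand(H, W, 2)          # e.g. door/window objectness maps
labels = np.random.randint(0, 4, size=(H, W))   # per-pixel facade classes

# Stack the RPN features as extra channels, then flatten to per-pixel samples.
stacked = np.concatenate([rgb, rpn_features], axis=-1)   # [H, W, 5]
X = stacked.reshape(-1, stacked.shape[-1])
y = labels.reshape(-1)

clf = RandomForestClassifier(n_estimators=10).fit(X, y)  # stand-in for the SRF
pred = clf.predict(X).reshape(H, W)
print(pred.shape)
```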


2020 ◽  
Author(s):  
Yao-Ting Huang ◽  
Po-Yu Liu ◽  
Pei-Wen Shih

Nanopore sequencing has been widely used for reconstructing a variety of microbial genomes. Owing to its higher error rate, the assembled genome requires further error correction. Existing methods erase many of these errors via deep neural networks trained from Nanopore reads. However, quite a few systematic errors are still left in the genome. This paper proposes a new model trained on homologous sequences extracted from closely related genomes, which provide valuable features missing from Nanopore reads. The developed program (called Homopolish) outperforms the state-of-the-art Racon/Medaka and MarginPolish/HELEN pipelines on metagenomes and isolates of bacteria, viruses and fungi. When Homopolish is combined with Medaka or with HELEN, genome quality can exceed Q50 on R9.4 flow cells. Genome quality is also improved on R10.3 flow cells (Q50-Q90). We show that Nanopore-only sequencing can now produce high-quality genomes without the need for Illumina hybrid sequencing.
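The abstract describes training a correction model on features drawn from homologous sequences rather than from the reads themselves. The toy sketch below is only meant to convey that framing, not Homopolish's actual features or model: each draft-genome position gets a feature vector built from the draft base and the bases observed in aligned homologs, and a classifier predicts the corrected base.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

BASES = "ACGT"

def position_features(draft_base, homolog_bases):
    """Toy per-position features: one-hot of the draft base plus counts of each
    base observed at the aligned positions of homologous sequences."""
    onehot = [int(draft_base == b) for b in BASES]
    counts = [homolog_bases.count(b) for b in BASES]
    return onehot + counts

# Toy training data: (draft base, aligned homolog bases, true base).
examples = [
    ("A", ["A", "A", "G"], "A"),
    ("G", ["A", "A", "A"], "A"),   # systematic draft error corrected by homologs
    ("T", ["T", "T", "T"], "T"),
    ("C", ["C", "T", "C"], "C"),
] * 10

X = np.array([position_features(d, h) for d, h, _ in examples])
y = np.array([t for _, _, t in examples])
clf = RandomForestClassifier(n_estimators=20).fit(X, y)
print(clf.predict([position_features("G", ["A", "A", "A"])]))
```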


Author(s):  
Junyou Li ◽  
Gong Cheng ◽  
Qingxia Liu ◽  
Wen Zhang ◽  
Evgeny Kharlamov ◽  
...  

In a large-scale knowledge graph (KG), an entity is often described by a large number of triple-structured facts. Many applications require abridged versions of entity descriptions, called entity summaries. Existing solutions to entity summarization are mainly unsupervised. In this paper, we present a supervised approach, NEST, based on a novel neural model that jointly encodes graph structure and text in KGs to generate high-quality, diversified summaries. Since it is costly to obtain manually labeled summaries for training, our supervision is weak: we train with programmatically labeled data which may contain noise but requires no manual work. Evaluation results show that our approach significantly outperforms the state of the art on two public benchmarks.
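As a loose, hedged illustration of combining a structural signal with a text signal and diversifying the selection (not NEST's neural model), the sketch below ranks an entity's triples by a simple combined score and penalizes repeated predicates; the scores, weights, and penalty are invented for the example.

```python
from collections import Counter

def summarize_entity(triples, text_scores, k=3):
    """Rank an entity's triples by a combined structure + text score and pick
    top-k while penalizing repeated predicates to keep the summary diverse.
    `triples` is a list of (predicate, object); `text_scores` maps a triple to
    a relevance score that some text encoder would supply (here just numbers)."""
    pred_freq = Counter(p for p, _ in triples)     # crude structural salience
    chosen, used_preds = [], Counter()
    candidates = set(triples)
    while candidates and len(chosen) < k:
        best = max(candidates, key=lambda t: pred_freq[t[0]] + text_scores[t]
                                             - 2.0 * used_preds[t[0]])
        chosen.append(best)
        used_preds[best[0]] += 1
        candidates.remove(best)
    return chosen

triples = [("occupation", "actor"), ("occupation", "director"),
           ("birthPlace", "Boston"), ("spouse", "J. Doe")]
text_scores = dict(zip(triples, [0.9, 0.4, 0.8, 0.6]))
print(summarize_entity(triples, text_scores))
```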


Author(s):  
Pankaj Gupta ◽  
Subburam Rajaram ◽  
Hinrich Schütze ◽  
Thomas Runkler

Past work in relation extraction mostly focuses on binary relations between entity pairs within a single sentence. Recently, the NLP community has gained interest in relation extraction between entity pairs spanning multiple sentences. In this paper, we propose a novel architecture for this task: inter-sentential dependency-based neural networks (iDepNN). iDepNN models the shortest and augmented dependency paths via recurrent and recursive neural networks to extract relationships within (intra-) and across (inter-) sentence boundaries. Compared to SVM and neural network baselines, iDepNN is more robust to false positives in relationships spanning sentences. We evaluate our models on four datasets from the newswire (MUC6) and medical (BioNLP shared task) domains, achieve state-of-the-art performance, and show a better balance of precision and recall for inter-sentential relationships. We perform better than 11 teams participating in the BioNLP shared task 2016 and achieve a gain of 5.2% (0.587 vs. 0.558) in F1 over the winning team. We also release the cross-sentence annotations for MUC6.
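Not the iDepNN implementation, but a minimal sketch of its recurrent component: encode the embeddings of the tokens lying on a (possibly cross-sentence) dependency path with a bidirectional GRU and classify the relation from the pooled hidden states. Dimensions and the pooling choice are assumptions.

```python
import torch
import torch.nn as nn

class DepPathRNN(nn.Module):
    """Run a bidirectional GRU over the embeddings of the tokens lying on a
    dependency path (which may cross a sentence boundary) and classify the
    relation from the pooled hidden states."""
    def __init__(self, vocab_size=5000, emb_dim=50, hidden=64, num_classes=3):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, path_ids):              # [batch, path_len]
        out, _ = self.rnn(self.emb(path_ids)) # [batch, path_len, 2*hidden]
        return self.fc(out.mean(dim=1))       # pool over the path, classify

model = DepPathRNN()
print(model(torch.randint(0, 5000, (2, 9))).shape)  # torch.Size([2, 3])
```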


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 539
Author(s):  
Yongbin Qin ◽  
Weizhe Yang ◽  
Kai Wang ◽  
Ruizhang Huang ◽  
Feng Tian ◽  
...  

Relation extraction aims to extract semantic relationships between two specified named entities in a sentence. Because a sentence often contains several named entity pairs, a neural network is easily confused when learning a relation representation without position and semantic information about the considered entity pair. In this paper, instead of learning an abstract representation from raw inputs, task-related entity indicators are designed to enable a deep neural network to concentrate on task-relevant information. By implanting entity indicators into a relation instance, the neural network can effectively encode syntactic and semantic information about the relation instance. Organized, structured and unified entity indicators make the similarity between sentences that share the same or a similar entity pair, as well as the internal symmetry of a single sentence, more obvious. In the experiments, a systematic analysis was conducted to evaluate the impact of entity indicators on relation extraction. The method achieves state-of-the-art performance, exceeding the compared methods by more than 3.7%, 5.0% and 11.2% in F1 score on the ACE Chinese corpus, ACE English corpus and Chinese literature text corpus, respectively.
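One simple way to implant entity indicators, shown below as a hedged sketch rather than the paper's exact scheme, is to wrap each entity span with typed marker tokens before the sentence is encoded; the marker strings and type labels here are assumptions.

```python
def implant_entity_indicators(tokens, e1_span, e2_span, e1_type="PER", e2_type="ORG"):
    """Wrap the two entity spans with indicator tokens so the encoder can see
    the positions and types of the considered entity pair.
    Spans are (start, end) token indices, end exclusive."""
    (s1, t1), (s2, t2) = e1_span, e2_span
    out = []
    for i, tok in enumerate(tokens):
        if i == s1: out.append(f"<e1:{e1_type}>")
        if i == s2: out.append(f"<e2:{e2_type}>")
        out.append(tok)
        if i == t1 - 1: out.append("</e1>")
        if i == t2 - 1: out.append("</e2>")
    return out

tokens = "John joined Acme Corp last year".split()
print(" ".join(implant_entity_indicators(tokens, (0, 1), (2, 4))))
# <e1:PER> John </e1> joined <e2:ORG> Acme Corp </e2> last year
```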

