RTJTN: Relational Triplet Joint Tagging Network for Joint Entity and Relation Extraction

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Zhenyu Yang ◽  
Lei Wang ◽  
Bo Ma ◽  
Yating Yang ◽  
Rui Dong ◽  
...  

Extracting entities and relations from unstructured sentences is one of the most widely studied tasks in natural language processing. However, most existing works process entity and relation information in a fixed order and suffer from error propagation. In this paper, we introduce a relational triplet joint tagging network (RTJTN), which consists of a joint entity and relation tagging layer and a relational triplet judgment layer. In the joint tagging layer, instead of extracting entities and relations separately, we propose a tagging method that allows the model to extract entities and relations from unstructured sentences simultaneously, preventing error propagation; and, to address the relation overlapping problem, we propose a relational triplet judgment network that judges the correct triplets among the group of triplets sharing the same relation in a sentence. In our experiments, we evaluate the network on the English public dataset NYT and the Chinese public datasets DuIE 2.0 and CMED. The F1 score of our model improves on the best baseline by 1.1, 6.0, and 5.1 points on the NYT, DuIE 2.0, and CMED datasets, respectively. In-depth analysis of the model's performance on overlapping-triplet and sentence-complexity problems shows that our model achieves gains in all cases, to varying degrees.
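To make the joint tagging idea concrete, the following is a minimal, illustrative sketch (not the exact RTJTN scheme): per relation type, a tagger emits BIO-style tags marking head ("H") and tail ("T") spans in one pass, so entities and relations are read off together rather than in sequence. The tag names and decoding logic here are assumptions for illustration.

```python
def _spans(tokens, tags, role):
    """Collect contiguous B-/I- spans for one role ('H' or 'T')."""
    spans, cur = [], []
    for tok, tag in zip(tokens, tags):
        if tag == f"B-{role}":
            if cur:
                spans.append(" ".join(cur))
            cur = [tok]
        elif tag == f"I-{role}" and cur:
            cur.append(tok)
        else:
            if cur:
                spans.append(" ".join(cur))
            cur = []
    if cur:
        spans.append(" ".join(cur))
    return spans

def decode_triplets(tokens, tags_per_relation):
    """tags_per_relation maps a relation name to one tag sequence over the tokens."""
    triplets = []
    for rel, tags in tags_per_relation.items():
        for head in _spans(tokens, tags, "H"):
            for tail in _spans(tokens, tags, "T"):
                triplets.append((head, rel, tail))
    return triplets
```

Because each relation type gets its own tag sequence, the same token can participate in triplets of several relations, which is what lets a scheme like this sidestep pipeline-style error propagation.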

Author(s):  
Yue Yuan ◽  
Xiaofei Zhou ◽  
Shirui Pan ◽  
Qiannan Zhu ◽  
Zeliang Song ◽  
...  

Joint extraction of entities and relations is an important task in natural language processing (NLP), which aims to capture all relational triplets from plain text. This is a big challenge because some of the triplets extracted from one sentence may have overlapping entities. Most existing methods perform entity recognition followed by relation detection between every possible entity pair, which usually entails numerous redundant operations. In this paper, we propose a relation-specific attention network (RSAN) to handle this issue. Our RSAN utilizes a relation-aware attention mechanism to construct a specific sentence representation for each relation, and then performs sequence labeling to extract the corresponding head and tail entities. Experiments on two public datasets show that our model can effectively extract overlapping triplets and achieve state-of-the-art performance.
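A minimal sketch of the relation-aware attention step, with assumed shapes (this simplifies the RSAN architecture): each relation embedding attends over the token representations, and the attended context is fused back into every token state to form a relation-specific sentence encoding that a sequence labeler would then tag.

```python
import numpy as np

def relation_aware_repr(H, r):
    """H: (seq_len, d) token states; r: (d,) relation embedding.

    Returns relation-specific token states of the same shape as H.
    """
    scores = H @ r                           # (seq_len,) similarity per token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax attention weights
    context = weights @ H                    # (d,) relation-aware context vector
    return H + context                       # fuse context into each token state
```

Running this once per relation type yields one representation per relation, which is how a model of this kind can tag different head/tail spans for different relations in the same sentence.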


2013 ◽  
Vol 30 (1) ◽  
pp. 45-75 ◽  
Author(s):  
Fouad Zablith ◽  
Grigoris Antoniou ◽  
Mathieu d'Aquin ◽  
Giorgos Flouris ◽  
Haridimos Kondylakis ◽  
...  

Ontology evolution aims at keeping an ontology up to date with respect to changes in the domain it models or novel requirements of the information systems it enables. The recent industrial adoption of Semantic Web techniques, which rely on ontologies, has increased the importance of ontology evolution research. Typical approaches to ontology evolution are designed as multiple-stage processes combining techniques from a variety of fields (e.g., natural language processing and reasoning). However, the few existing surveys on this topic lack an in-depth analysis of the various stages of the ontology evolution process. This survey extends the literature by adopting a process-centric view of ontology evolution. Accordingly, we first provide an overall process model synthesized from an overview of the existing models in the literature. We then survey the major approaches to each of the steps in this process and conclude with future challenges for techniques aiming to solve each particular stage.


2019 ◽  
Author(s):  
Peng Su ◽  
Gang Li ◽  
Cathy Wu ◽  
K. Vijay-Shanker

Significant progress has recently been made in applying deep learning to natural language processing tasks. However, deep learning models typically require large amounts of annotated training data, while often only small labeled datasets are available for many natural language processing tasks in the biomedical literature. Building large datasets for deep learning is expensive, since it involves considerable human effort and usually requires domain expertise in specialized fields. In this work, we consider augmenting manually annotated data with large amounts of data obtained by distant supervision. Because data obtained by distant supervision is often noisy, we first apply heuristics to remove some of the incorrect annotations. Then, using methods inspired by transfer learning, we show that the resulting models outperform models trained on the original manually annotated sets.
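The paper's exact filtering rules are not given here, but one common distant-supervision cleanup heuristic looks like the following sketch: keep an auto-labeled sentence only if both entities actually appear and lie within a small token window, discarding pairs so far apart that the relation is unlikely to be expressed. The function name and the `max_gap` threshold are illustrative assumptions.

```python
def keep_example(tokens, head, tail, max_gap=10):
    """Heuristic filter for a distantly supervised training example.

    tokens: tokenized sentence; head/tail: the two entity strings
    that distant supervision paired with a relation label.
    """
    try:
        i, j = tokens.index(head), tokens.index(tail)
    except ValueError:
        return False                 # an entity is missing: discard as noise
    return abs(i - j) <= max_gap     # keep only nearby entity pairs
```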


Events and time are two key concepts in natural language processing; through the various event-oriented tasks, they have become essential elements of information extraction. In natural language processing and information extraction or retrieval, events and time underpin several applications, such as text summarization, document summarization, and question answering systems. In this paper, we present the event-time graph as a new way of constructing event- and time-based information from text. In this event-time graph, nodes are events, whereas edges represent the temporal and coreference relations between events. Much previous research in natural language processing has focused on individual extraction tasks in a domain-specific way; in this work, we extract and represent the relationships between events and time by constructing event-time graphs. Our overall system is a three-step process that performs event extraction, time extraction, and relation extraction, with each step at a performance level comparable to the state of the art. For event extraction, we train and evaluate our model on the MUC corpus annotated with event mentions. For time extraction, we test the model on several news articles from a Wikipedia corpus. We then represent event-time relations by constructing event-time graphs. Finally, we evaluate the overall quality of the event graphs with evaluation metrics and conclude with observations on the entire work.
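The graph structure described above can be sketched as a toy data structure (the class and method names are illustrative, not the paper's API): events become nodes, and temporal or coreference links become labeled directed edges.

```python
class EventTimeGraph:
    """Toy event-time graph: events as nodes, labeled relations as edges."""

    def __init__(self):
        self.nodes = set()
        self.edges = []                  # list of (src_event, label, dst_event)

    def add_event(self, event):
        self.nodes.add(event)

    def add_relation(self, src, label, dst):
        """Add a temporal/coreference edge, creating nodes as needed."""
        self.add_event(src)
        self.add_event(dst)
        self.edges.append((src, label, dst))

    def neighbors(self, event):
        """All outgoing (label, target) pairs for an event."""
        return [(label, dst) for src, label, dst in self.edges if src == event]
```

A pipeline like the one described would populate such a graph after its event-extraction and time-extraction steps, then evaluate the resulting structure.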


2020 ◽  
Vol 10 (18) ◽  
pp. 6429
Author(s):  
SungMin Yang ◽  
SoYeop Yoo ◽  
OkRan Jeong

Alongside studies on artificial intelligence technology, research is also being carried out actively in the field of natural language processing to understand and process people's language, that is, natural language. For computers to learn on their own, the ability to understand natural language is very important. There is a wide variety of tasks in the field of natural language processing, but we focus on the named entity recognition and relation extraction tasks, which are considered the most important for understanding sentences. We propose DeNERT-KG, a model that can extract subjects, objects, and their relationships, to grasp the meaning inherent in a sentence. Based on the BERT language model and a Deep Q-Network, a named entity recognition (NER) model for extracting subjects and objects is established, and a knowledge graph is applied for relation extraction. Using the DeNERT-KG model, it is possible to extract the subject, type of subject, object, type of object, and relationship from a sentence, and we verify this model through experiments.
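The two-stage idea can be sketched as follows (hypothetical stand-in functions and a toy graph, not the actual BERT/DQN model): an NER step yields typed subject/object candidates, and a knowledge graph supplies the relation between them.

```python
# Toy knowledge graph: (subject, object) -> relation. Illustrative data only.
KG = {("Seoul", "South Korea"): "capital_of"}

def extract_triplet(subject, subj_type, obj, obj_type, kg=KG):
    """Combine NER output (typed subject/object) with a KG relation lookup.

    Returns the full five-part result the abstract describes, or None
    when the knowledge graph holds no relation for the pair.
    """
    relation = kg.get((subject, obj))
    if relation is None:
        return None
    return {"subject": subject, "subject_type": subj_type,
            "object": obj, "object_type": obj_type,
            "relation": relation}
```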


Symmetry ◽  
2020 ◽  
Vol 12 (3) ◽  
pp. 354
Author(s):  
Tiberiu-Marian Georgescu

This paper describes the development and implementation of a natural language processing model based on machine learning which performs cognitive analysis for cybersecurity-related documents. A domain ontology was developed using a two-step approach: (1) the symmetry stage and (2) the machine adjustment. The first stage is based on the symmetry between the way humans represent a domain and the way machine learning solutions do. Therefore, the cybersecurity field was initially modeled based on the expertise of cybersecurity professionals. A dictionary of relevant entities was created; the entities were classified into 29 categories and later implemented as classes in a natural language processing model based on machine learning. After running successive performance tests, the ontology was remodeled from 29 to 18 classes. Using the ontology, a natural language processing model based on a supervised learning model was defined. We trained the model using sets of approximately 300,000 words. Remarkably, our model obtained an F1 score of 0.81 for named entity recognition and 0.58 for relation extraction, showing superior results compared to other similar models identified in the literature. Furthermore, in order to be easily used and tested, a web application that integrates our model as the core component was developed.
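The dictionary-to-ontology-class step described above can be sketched as follows (the entity names and class labels are hypothetical examples, not the paper's actual 18-class ontology): a dictionary of known cybersecurity entities maps surface forms to ontology classes, which is the kind of labeling a supervised NER model would be trained on.

```python
# Hypothetical fragment of a cybersecurity entity dictionary.
ENTITY_CLASSES = {
    "phishing": "Attack",
    "firewall": "Defence",
    "CVE-2019-0708": "Vulnerability",
}

def label_tokens(tokens, dictionary=ENTITY_CLASSES):
    """Tag each token with its ontology class, or 'O' for non-entities."""
    return [(tok, dictionary.get(tok, "O")) for tok in tokens]
```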


Author(s):  
Jie Liu ◽  
Shaowei Chen ◽  
Bingquan Wang ◽  
Jiaxin Zhang ◽  
Na Li ◽  
...  

Joint entity and relation extraction is critical for many natural language processing (NLP) tasks and has attracted increasing research interest. However, it still faces the challenges of identifying overlapping relation triplets along with entire entity boundaries and of detecting multi-type relations. In this paper, we propose an attention-based joint model, which mainly contains an entity extraction module and a relation detection module, to address these challenges. The key of our model is a supervised multi-head self-attention mechanism, devised as the relation detection module, which learns the token-level correlation for each relation type separately. With this attention mechanism, our model can effectively identify overlapping relations and flexibly predict each relation type with its corresponding intensity. To verify the effectiveness of our model, we conduct comprehensive experiments on two benchmark datasets. The experimental results demonstrate that our model achieves state-of-the-art performance.
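A simplified sketch of the core scoring idea, with assumed shapes (this is not the paper's full architecture): one attention head per relation type scores every token pair, so a high attention weight between token i and token j is read as evidence that the pair stands in that relation.

```python
import numpy as np

def relation_scores(H, Wq, Wk):
    """One relation-specific attention head over token states.

    H: (n, d) token states; Wq, Wk: (d, d_k) projections for this relation.
    Returns an (n, n) matrix of token-pair scores; each row sums to 1.
    """
    Q, K = H @ Wq, H @ Wk
    logits = Q @ K.T / np.sqrt(K.shape[1])       # scaled dot-product scores
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)     # row-wise softmax
```

Supervising these per-relation score matrices directly (rather than a single shared head) is what lets a model of this kind assign several relation types, each with its own intensity, to the same token pair.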


Author(s):  
S. Kavibharathi ◽  
S. Lakshmi Priyankaa ◽  
M.S. Kaviya ◽  
Dr. S. Vasanthi

The World Wide Web, including social networking sites, blog comments, and forums, contains a huge amount of user emotion data on social events, product brands, and political arguments. These data reflect users' moods and have a large impact on readers, product suppliers, and politicians. A key challenge for credible analysis is the lack of sufficient labeled data in the Natural Language Processing (NLP) field. Classifying content as positive or negative based on user feedback and live chat serves as the basis for a wide range of tasks that assess the meaning of text content. Recurrent neural networks are very effective for text classification. Bringing unstructured social media data into a reasonable structure and analyzing it is of great importance for this kind of sentiment analysis, and sentiment can then be predicted using NLP-based sentiment analysis. In the proposed method, Recurrent Neural Networks (RNNs) perform sentiment analysis on live chat data. Sentiment analysis and deep learning techniques are integrated to address this problem, since deep learning models learn features automatically. The RNN-based sentiment classifier is applied to various text-analysis and language problems, supporting in-depth analysis and visualization of product review sentiment.
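A bare-bones sketch of the kind of RNN sentiment classifier the passage describes (toy weights and a two-class decision rule chosen here for illustration; a real model would be trained): a recurrent cell reads the token embeddings one step at a time, and the final hidden state is projected to a positive/negative score.

```python
import numpy as np

def rnn_sentiment(x_seq, Wx, Wh, Wo):
    """Classify a token-embedding sequence as positive or negative.

    x_seq: iterable of (d,) token embeddings; Wx: (h, d) input weights;
    Wh: (h, h) recurrent weights; Wo: (h,) output projection.
    """
    h = np.zeros(Wh.shape[0])
    for x in x_seq:
        h = np.tanh(Wx @ x + Wh @ h)     # recurrent state update per token
    logit = Wo @ h                       # scalar sentiment score
    return "positive" if logit > 0 else "negative"
```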

