Detecting the Presence of Named Entities in Bengali: Corpus and Experiments

Author(s):  
Farzana Rashid ◽  
Fahmida Hamid

Named Entity Recognition (NER) belongs to the fields of Information Extraction (IE) and Natural Language Processing (NLP). NER aims to find named entities in textual data and categorize them into recognizable classes. Named entities play vital roles in related tasks such as question answering, relation extraction, and machine translation. Researchers have done a significant amount of work (e.g., dataset construction and analysis) in this direction for languages like English, Spanish, Chinese, Russian, and Arabic, but we do not find a comparable amount of work for several South Asian languages like Bengali/Bangla. Hence, as an initial phase, we have constructed a high-quality dataset for Bengali. In this paper, we identify the presence of Named Entities (NEs) in Bengali sentences, classify them into standardized categories, and test whether automatic detection of NEs is possible. We present a new corpus and experimental results. Our dataset, annotated by multiple humans, shows promising results (F-measures ranging from 0.72 to 0.84) in different setups: support vector machine (SVM) setups with simple language features and a Long Short-Term Memory (LSTM) setup with various word embeddings.
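To make the SVM setup mentioned in this abstract concrete, here is a minimal sketch (not the authors' code) of sentence-level detection of whether any named entity is present, using an SVM over simple surface features; the Bengali sentences, labels, and character n-gram features are placeholders and assumptions, not the paper's actual data or feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Placeholder data: 1 = sentence contains a named entity, 0 = it does not.
sentences = ["রবীন্দ্রনাথ ঠাকুর কলকাতায় জন্মগ্রহণ করেন", "আজ আবহাওয়া খুব ভালো"]
labels = [1, 0]

# Character n-grams are a common language-light feature choice for Bengali;
# the paper's exact feature set may differ.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
model.fit(sentences, labels)
print(model.predict(["ঢাকা বাংলাদেশের রাজধানী"]))  # expect 1 (contains a location)
```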

2021 ◽  
Vol 11 (18) ◽  
pp. 8682
Author(s):  
Ching-Sheng Lin ◽  
Jung-Sing Jwo ◽  
Cheng-Hsiung Lee

Clinical Named Entity Recognition (CNER) focuses on locating named entities in electronic medical records (EMRs), and the obtained results play an important role in the development of intelligent biomedical systems. In addition to research on alphabetic languages, the study of non-alphabetic languages has attracted considerable attention as well. In this paper, a neural model is proposed to address the extraction of entities from EMRs written in Chinese. To avoid noise introduced by errors in Chinese word segmentation, we employ character embeddings as the only feature, without extra resources. In our model, concatenated n-gram character embeddings are used to represent the context semantics. The self-attention mechanism is then applied to model long-range dependencies among the embeddings. The concatenation of the new representations obtained by the attention module is taken as the input to a bidirectional long short-term memory (BiLSTM), followed by a conditional random field (CRF) layer to extract entities. The empirical study is conducted on the CCKS-2017 Shared Task 2 dataset to evaluate our method, and the experimental results show that our model outperforms other approaches.
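As a rough illustration of the architecture this abstract describes, the following PyTorch sketch wires concatenated character n-gram embeddings into a self-attention layer and a BiLSTM that produces per-character emission scores. The CRF decoding layer is omitted here (a library such as pytorch-crf could supply it), and all dimensions, tag counts, and the random inputs are illustrative assumptions rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class CharNgramAttnBiLSTM(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128, n_tags=9):
        super().__init__()
        # Separate embedding tables for character unigrams, bigrams, and trigrams.
        self.embs = nn.ModuleList(nn.Embedding(vocab_size, emb_dim) for _ in range(3))
        self.attn = nn.MultiheadAttention(3 * emb_dim, num_heads=4, batch_first=True)
        self.bilstm = nn.LSTM(3 * emb_dim, hidden, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, n_tags)   # emission scores for a CRF layer

    def forward(self, uni, bi, tri):                # each: (batch, seq_len) integer ids
        x = torch.cat([e(t) for e, t in zip(self.embs, (uni, bi, tri))], dim=-1)
        a, _ = self.attn(x, x, x)                   # self-attention over the sequence
        h, _ = self.bilstm(a)
        return self.emit(h)                         # (batch, seq_len, n_tags)

ids = torch.randint(0, 1000, (2, 20))
print(CharNgramAttnBiLSTM(1000)(ids, ids, ids).shape)  # torch.Size([2, 20, 9])
```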


Author(s):  
V. A. Korzun

This paper presents the results of our participation in the Russian Relation Extraction for Business shared task (RuREBus) at Dialogue Evaluation 2020. Our team took first place, ahead of the five other participating teams, in the Relation Extraction with Named Entities task. The experiments showed that the best model is based on R-BERT, which achieved significant improvements over models based on convolutional or recurrent neural networks on the SemEval-2010 Task 8 relation extraction dataset. To adapt this model to the RuREBus task, we also added modifications such as negative sampling. In addition, we tested other models for the Relation Extraction and Named Entity Recognition tasks.
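The abstract names R-BERT as the best-performing model; the sketch below captures its core idea under stated assumptions (the model name, entity span indices, and relation count are placeholders, and the negative-sampling modification is not shown): encode the sentence with BERT, pool the [CLS] token and the two entity spans, concatenate them, and classify the relation. Instantiating it downloads pretrained weights.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class RBertLike(nn.Module):
    def __init__(self, name="bert-base-multilingual-cased", n_relations=10):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        h = self.encoder.config.hidden_size
        self.classifier = nn.Linear(3 * h, n_relations)

    def forward(self, input_ids, attention_mask, e1_span, e2_span):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        hidden = out.last_hidden_state                     # (batch, seq, hidden)
        cls = hidden[:, 0]                                 # [CLS] representation
        e1 = hidden[:, e1_span[0]:e1_span[1]].mean(dim=1)  # averaged entity-1 span
        e2 = hidden[:, e2_span[0]:e2_span[1]].mean(dim=1)  # averaged entity-2 span
        return self.classifier(torch.cat([cls, e1, e2], dim=-1))

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
batch = tok(["Компания «Ромашка» подписала договор с ООО «Лютик»"], return_tensors="pt")
logits = RBertLike()(batch["input_ids"], batch["attention_mask"], (1, 4), (8, 11))
print(logits.shape)  # torch.Size([1, 10])
```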


2014 ◽  
Vol 571-572 ◽  
pp. 339-344
Author(s):  
Yong He Lu ◽  
Ming Hui Liang

The answer extraction model has a direct impact on the performance of an automatic question answering (QA) system. In this paper, an answer extraction model based on named entity recognition is presented. It mainly answers specific questions whose answers are named entities. First, it classifies questions according to their expected answer types; it then identifies named entities of the suitable types in the retrieved text fragments; finally, it selects the answer with the highest score. Experiments show that the model can accurately answer questions provided by the Text REtrieval Conference (TREC). The proposed model is thus easy to implement, and its performance is good for such specific questions.
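The pipeline the abstract outlines (classify the question by expected answer type, find named entities of the matching type, score the candidates) can be sketched as follows. spaCy and its en_core_web_sm model stand in for the paper's unspecified NER component, and the question-type mapping and frequency-based score are assumptions made for illustration.

```python
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")

# Hypothetical mapping from question cue words to spaCy entity labels.
QUESTION_TYPE_TO_LABEL = {"who": "PERSON", "where": "GPE", "when": "DATE"}

def answer(question: str, passages: list[str]) -> str | None:
    expected = next((lbl for cue, lbl in QUESTION_TYPE_TO_LABEL.items()
                     if question.lower().startswith(cue)), None)
    scores = Counter()
    for passage in passages:
        for ent in nlp(passage).ents:
            if expected is None or ent.label_ == expected:
                scores[ent.text] += 1          # simple frequency-based score
    return scores.most_common(1)[0][0] if scores else None

print(answer("Who wrote Hamlet?",
             ["Hamlet was written by William Shakespeare.",
              "William Shakespeare was born in Stratford-upon-Avon."]))
```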


Information ◽  
2020 ◽  
Vol 11 (2) ◽  
pp. 82
Author(s):  
SaiKiranmai Gorla ◽  
Lalita Bhanu Murthy Neti ◽  
Aruna Malapati

Named entity recognition (NER) is a fundamental step for many natural language processing tasks, and hence enhancing the performance of NER models is always appreciated. With limited resources available, NER for South Asian languages like Telugu is quite a challenging problem. This paper attempts to improve NER performance for Telugu using gazetteer-related features, which are automatically generated from Wikipedia pages. We use these gazetteer features along with other well-known features such as contextual, word-level, and corpus features to build NER models. The NER models are developed using three well-known classifiers: conditional random field (CRF), support vector machine (SVM), and the margin infused relaxed algorithm (MIRA). The gazetteer features are shown to improve performance, and the MIRA-based NER model fares better than its SVM and CRF counterparts.
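A minimal sketch of folding a Wikipedia-derived gazetteer into token-level features for a CRF, in the spirit of this paper; the gazetteer entries, tokens, tags, and the use of sklearn-crfsuite are illustrative stand-ins rather than the authors' tooling.

```python
import sklearn_crfsuite

PERSON_GAZETTEER = {"సచిన్", "టెండూల్కర్"}   # illustrative Telugu person-name entries

def token_features(sent, i):
    word = sent[i]
    return {
        "word": word,
        "prefix3": word[:3],                          # word-level feature
        "prev": sent[i - 1] if i > 0 else "<S>",      # contextual feature
        "in_person_gazetteer": word in PERSON_GAZETTEER,  # gazetteer feature
    }

train_sents = [["సచిన్", "టెండూల్కర్", "క్రికెటర్"]]
train_tags = [["B-PER", "I-PER", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
crf.fit(X, train_tags)
print(crf.predict(X))
```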


2020 ◽  
Author(s):  
Vladislav Mikhailov ◽  
Tatiana Shavrina

Named Entity Recognition (NER) is a fundamental task in the fields of natural language processing and information extraction. NER has been widely used as a standalone tool or as an essential component in a variety of applications such as question answering, dialogue assistants, and knowledge graph development. However, training reliable NER models requires a large amount of labelled data, which is expensive to obtain, particularly in specialized domains. This paper describes a method to learn a domain-specific NER model for an arbitrary set of named entities when domain-specific supervision is not available. We assume that the supervision can be obtained with no human effort and that neural models can learn from each other. The code, data, and models are publicly available.
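The abstract does not spell out how the neural models "learn from each other", so the following is only an assumption-laden sketch of one common realisation, knowledge distillation, in which a student tagger is trained on the softened tag distributions of a teacher model instead of human labels; all tensor shapes are dummies.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student tag distributions."""
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_logp = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(student_logp, teacher_probs, reduction="batchmean") * t * t

student = torch.randn(4, 20, 9, requires_grad=True)  # (batch, seq_len, n_tags)
teacher = torch.randn(4, 20, 9)
print(distillation_loss(student, teacher))
```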


Author(s):  
Yashvardhan Sharma ◽  
Rupal Bhargava ◽  
Bapiraju Vamsi Tadikonda

With the growth of internet applications and social media platforms, informal text communication has increased. People from different regions tend to mix their regional language with English in social media text. This trend is common in many multilingual nations and is known as code mixing: the use of multiple languages within a single statement. Named entity recognition (NER) is a well-researched topic in natural language processing (NLP), but present NER systems tend to perform poorly on code-mixed text. This paper proposes three approaches to improve named entity recognizers for handling code mixing: the first is based on machine learning techniques such as support vector machines and other tree-based classifiers, the second is based on neural networks, and the third uses a long short-term memory (LSTM) architecture.
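A small sketch in the spirit of the first approach (classical classifiers over hand-crafted token features); the language-id feature, word-shape features, and the tiny Hindi-English example are invented for illustration and are not taken from the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

def features(token, lang):
    return {
        "lower": token.lower(),
        "is_title": token.istitle(),
        "lang": lang,                  # e.g. "en" / "hi" from a language identifier
        "suffix2": token[-2:],
    }

tokens = [("Delhi", "en"), ("mein", "hi"), ("Sharma", "en"), ("se", "hi")]
tags = ["B-LOC", "O", "B-PER", "O"]

clf = make_pipeline(DictVectorizer(), RandomForestClassifier(n_estimators=50))
clf.fit([features(t, l) for t, l in tokens], tags)
print(clf.predict([features("Mumbai", "en")]))
```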


2011 ◽  
Vol 34 (1) ◽  
pp. 35-67 ◽  
Author(s):  
Asif Ekbal ◽  
Sivaji Bandyopadhyay

Named Entity Recognition (NER) aims to classify each word of a document into predefined target named entity (NE) classes and is nowadays considered fundamental for many Natural Language Processing (NLP) tasks such as information retrieval, machine translation, information extraction, and question answering. This paper reports on the development of an NER system for Bengali and Hindi using a Support Vector Machine (SVM). We have used annotated corpora of 122,467 tokens of Bengali and 502,974 tokens of Hindi, tagged with the twelve NE classes defined as part of the IJCNLP-08 NER Shared Task for South and South East Asian Languages (SSEAL). An appropriate tag conversion routine has been developed to convert the data into forms tagged with four NE tags, namely Person name, Location name, Organization name, and Miscellaneous name. The system makes use of the different contextual information of the words along with a variety of orthographic word-level features that are helpful in predicting the different NE classes. The system has been tested on gold-standard test sets of 35K and 38K tokens for Bengali and Hindi, respectively. Evaluation results demonstrate overall recall, precision, and F-score values of 85.11%, 81.74%, and 83.39% for Bengali and 82.76%, 77.81%, and 80.21% for Hindi, respectively. A statistical analysis (ANOVA) shows that the improvement in performance with language-dependent features over language-independent features is statistically significant for both Bengali and Hindi.
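The tag conversion routine mentioned above can be illustrated with a sketch like the one below, which collapses the fine-grained IJCNLP-08 tagset into the four coarse classes used in the paper; the specific fine-grained tag names in the mapping are assumptions and should be checked against the shared-task guidelines.

```python
# Hypothetical fine-grained to coarse-grained tag mapping.
FINE_TO_COARSE = {
    "NEP": "Person",
    "NEL": "Location",
    "NEO": "Organization",
    # every other fine-grained NE tag is folded into Miscellaneous
}

def convert(tag: str) -> str:
    if tag == "O":
        return "O"
    return FINE_TO_COARSE.get(tag, "Miscellaneous")

print([convert(t) for t in ["NEP", "NEL", "NETI", "O"]])
# ['Person', 'Location', 'Miscellaneous', 'O']
```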


2010 ◽  
Vol 1 ◽  
pp. 26-58 ◽  
Author(s):  
Asif Ekbal ◽  
Sivaji Bandyopadhyay

This paper reports on a multi-engine approach for the development of a Named Entity Recognition (NER) system for Bengali that combines classifiers, namely Maximum Entropy (ME), Conditional Random Field (CRF), and Support Vector Machine (SVM), with the help of weighted voting techniques. The training set consists of approximately 272K wordforms, out of which 150K wordforms have been manually annotated with the four major named entity (NE) tags, namely Person name, Location name, Organization name, and Miscellaneous name. An appropriate tag conversion routine has been defined to convert the 122K wordforms of the IJCNLP-08 NER Shared Task on South and South East Asian Languages (NERSSEAL) data into the desired form. The individual classifiers make use of the different contextual information of the words along with a variety of features that are helpful in predicting the various NE classes. Lexical context patterns, generated semi-automatically from an unlabeled corpus of 3 million wordforms, have been used as features of the classifiers to improve their performance. In addition, we propose a number of techniques to post-process the output of each classifier in order to reduce errors and further improve performance. Finally, we use three weighted voting techniques to combine the individual models. Experimental results show the effectiveness of the proposed multi-engine approach, with overall Recall, Precision, and F-Score values of 93.98%, 90.63%, and 92.28%, respectively; this is an improvement of 14.92% in F-Score over the best-performing baseline SVM-based system and of 18.36% in F-Score over the least-performing baseline ME-based system. Comparative evaluation also shows that the proposed system outperforms three other existing Bengali NER systems.
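A compact sketch of weighted voting over per-token predictions from the three classifiers (ME, CRF, SVM); the weights and the example predictions are invented, whereas the paper evaluates several concrete weighting schemes.

```python
from collections import defaultdict

def weighted_vote(predictions, weights):
    """predictions: {model_name: [tag, tag, ...]}, weights: {model_name: float}."""
    n_tokens = len(next(iter(predictions.values())))
    combined = []
    for i in range(n_tokens):
        scores = defaultdict(float)
        for model, tags in predictions.items():
            scores[tags[i]] += weights[model]    # add this model's weight to its tag
        combined.append(max(scores, key=scores.get))
    return combined

preds = {
    "ME":  ["B-PER", "O", "B-LOC"],
    "CRF": ["B-PER", "O", "O"],
    "SVM": ["B-ORG", "O", "B-LOC"],
}
print(weighted_vote(preds, {"ME": 0.8, "CRF": 1.0, "SVM": 1.1}))
# ['B-PER', 'O', 'B-LOC']
```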


2020 ◽  
Vol 2 (2) ◽  
pp. 130-139
Author(s):  
Dr. Wang Haoxiang ◽  
Dr. Smys S.

Applications associated with the internet produce enormous amounts of data under diverse circumstances, which creates many challenges both in examining the data and in operating systems that rely on the cloud. Workflow scheduling was devised in the cloud to process and manage the execution of tasks properly with respect to time, and named entity recognition (NER) is used here to further enhance the scheduling process. NER is an important task within the more general discipline of information extraction, and it remains highly challenging in the cloud paradigm. This paper lays out an innovative framework, termed MC-SVM (Multi-Class Support Vector Machine), to devise workflow scheduling in the cloud paradigm. Task scheduling in the cloud delivers an arrangement that sets up the workflows, with named entities recognized using the MC-SVM. The developed algorithm enhances the resource allocation process by performing a simultaneous and dynamic allocation/reallocation of named entities to cloud resources while satisfying performance and cost demands. The results observed when validating the proposed algorithm demonstrate the system's capability to manage cloud resources effectively while optimizing makespan and cost.
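The abstract stays at a high level, so the following is only a loose sketch of the central component it names: a multi-class SVM that maps workflow tasks, described by simple numeric features, to cloud resource classes. The feature set, the resource classes, and the data are invented for illustration.

```python
from sklearn.svm import SVC

# Hypothetical task features: [estimated runtime (s), input size (MB), priority]
tasks = [[10, 200, 1], [120, 5000, 3], [5, 50, 2], [300, 12000, 3]]
resource_class = ["small-vm", "large-vm", "small-vm", "large-vm"]

scheduler = SVC(decision_function_shape="ovr")   # one-vs-rest multi-class SVM
scheduler.fit(tasks, resource_class)
print(scheduler.predict([[60, 1000, 2]]))        # suggested resource class for a new task
```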

