Abs-Sum-Kan: An Abstractive Text Summarization Technique for an Indian Regional Language by Induction of Tagging Rules

2019 ◽  
Vol 8 (2S3) ◽  
pp. 1028-1036

This paper presents a full abstraction for Indian languages, specifically Kannada, in the context of guided summarization. The proposed process generates the abstractive summary by focusing on a unified presentation model with aspect based Information Extraction (IE) rules and scheme based Templates. TF/IDF rules are used for classification into categories. Lexical analysis (like Parts Of Speech tagging and Named Entity Recognition) reduces prolixity, which leads to robust IE rules. Usage of Templates for sentence generation makes the summaries succinct and information intensive. The IE rules are designed to accommodate the complexities of the considered languages. Later, the system aims to produce a guided summary of domain specific documents. An abstraction scheme is a collection of aspects and associated IE rules. Each abstraction scheme is designed based on a theme or subcategory. An extensive statistical and qualitative evaluation of the summaries generated by the system has been conducted and the results are found to be very promising.
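The TF/IDF classification step the abstract mentions can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the `categories` keyword lists, the toy English tokens (standing in for Kannada words), and the smoothing scheme are all invented for the example.

```python
import math
from collections import Counter

# Illustrative category keyword lists (hypothetical; the paper's schemes
# are theme/subcategory based and operate on Kannada text).
categories = {
    "sports": ["match", "team", "score", "tournament", "player"],
    "politics": ["election", "minister", "party", "vote", "policy"],
}

def tfidf_scores(doc_tokens, corpus):
    """Score each token of a document by tf * idf over a small corpus."""
    n_docs = len(corpus)
    tf = Counter(doc_tokens)
    scores = {}
    for term, count in tf.items():
        df = sum(1 for d in corpus if term in d)
        idf = math.log((1 + n_docs) / (1 + df)) + 1  # smoothed idf
        scores[term] = (count / len(doc_tokens)) * idf
    return scores

def classify(doc_tokens, corpus):
    """Assign the category whose keywords accumulate the most TF/IDF weight."""
    scores = tfidf_scores(doc_tokens, corpus)
    totals = {cat: sum(scores.get(w, 0.0) for w in kws)
              for cat, kws in categories.items()}
    return max(totals, key=totals.get)

corpus = [["match", "team", "score"], ["election", "vote"], ["player", "match"]]
print(classify(["the", "team", "won", "the", "match"], corpus))  # -> sports
```

Once a document is routed to a category, the category's abstraction scheme (its aspects, IE rules, and templates) would drive the actual sentence generation.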


Named Entity Recognition is the process of identifying named entities, the designators of a sentence. Designators of a sentence are domain specific. The proposed system identifies named entities in the Malayalam language belonging to the tourism domain, which generally include names of persons, places, organizations, dates, etc. The system uses word, part of speech and lexicalized features to find the probability of a word belonging to a named entity category and to do the appropriate classification. Probability is calculated based on supervised machine learning using word and part of speech features present in a tagged training corpus and using certain rules applied based on lexicalized features.
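The supervised probability estimate described above can be sketched as a count-based P(tag | word, POS) with a rule-based fallback. Everything here is illustrative: the tiny tagged corpus, the tag set, and the capitalization rule are invented stand-ins, and the real system operates on Malayalam tourism-domain text.

```python
from collections import Counter, defaultdict

# Hypothetical tagged training corpus: (word, POS, NE tag) triples.
tagged_corpus = [
    ("Kochi", "NNP", "PLACE"),
    ("Kochi", "NNP", "PLACE"),
    ("visited", "VBD", "O"),
    ("Kerala", "NNP", "PLACE"),
    ("Tourism", "NNP", "ORG"),
    ("Monday", "NNP", "DATE"),
]

# Count how often each (word, POS) pair carries each NE tag.
counts = defaultdict(Counter)
for word, pos, tag in tagged_corpus:
    counts[(word, pos)][tag] += 1

def classify(word, pos):
    """Pick the most probable tag for (word, POS); fall back on a rule."""
    seen = counts.get((word, pos))
    if seen:
        tag, freq = seen.most_common(1)[0]
        prob = freq / sum(seen.values())
        return tag, prob
    # Lexicalized fallback rule (invented): unseen capitalized proper
    # nouns default to PLACE in this toy tourism setting.
    if pos == "NNP" and word[:1].isupper():
        return "PLACE", 0.0
    return "O", 0.0

print(classify("Kochi", "NNP"))   # ('PLACE', 1.0) -- seen in the corpus
print(classify("Munnar", "NNP"))  # unseen word -> rule-based fallback
```

The division of labour mirrors the abstract: statistics from the tagged corpus handle words seen in training, while lexicalized rules cover the unseen ones.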


2020 ◽  
Author(s):  
Usman Naseem ◽  
Matloob Khushi ◽  
Vinay Reddy ◽  
Sakthivel Rajendran ◽  
Imran Razzak ◽  
...  

Abstract Background: In recent years, with the growing amount of biomedical documents, coupled with advancement in natural language processing algorithms, the research on biomedical named entity recognition (BioNER) has increased exponentially. However, BioNER research is challenging because NER in the biomedical domain is: (i) often restricted due to a limited amount of training data, (ii) an entity can refer to multiple types and concepts depending on its context and, (iii) heavily reliant on acronyms that are sub-domain specific. Existing BioNER approaches often neglect these issues and directly adopt state-of-the-art (SOTA) models trained on general corpora, which often yields unsatisfactory results. Results: We propose biomedical ALBERT (A Lite Bidirectional Encoder Representations from Transformers for Biomedical Text Mining) - bioALBERT - an effective domain-specific pre-trained language model trained on a huge biomedical corpus designed to capture biomedical context-dependent NER. We adopted the self-supervised loss function used in ALBERT that targets modelling inter-sentence coherence to better learn context-dependent representations, and incorporated parameter reduction strategies to minimise memory usage and reduce training time in BioNER. In our experiments, BioALBERT outperformed comparative SOTA BioNER models on eight biomedical NER benchmark datasets with four different entity types. Performance improved on (i) disease-type corpora by 7.47% (NCBI-disease) and 10.63% (BC5CDR-disease); (ii) drug-chem-type corpora by 4.61% (BC5CDR-Chem) and 3.89% (BC4CHEMD); (iii) gene-protein-type corpora by 12.25% (BC2GM) and 6.42% (JNLPBA); and (iv) species-type corpora by 6.19% (LINNAEUS) and 23.71% (Species-800), leading to state-of-the-art results. Conclusions: The performance of the proposed model on four different biomedical entity types shows that our model is robust and generalizable in recognizing biomedical entities in text.
We trained four different variants of BioALBERT models which are available for the research community to be used in future research.
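One of the ALBERT parameter-reduction strategies the abstract refers to is factorized embedding parameterization: the vocabulary embedding (V x H) is split into a small lookup table (V x E) plus a projection (E x H), with E much smaller than H. A back-of-the-envelope sketch, using typical ALBERT-base sizes (illustrative, not BioALBERT's exact configuration):

```python
# Factorized embedding parameterization, as in ALBERT.
V, H, E = 30_000, 768, 128  # vocab size, hidden size, embedding size

bert_style = V * H              # one big V x H embedding table
albert_style = V * E + E * H    # small V x E table + E x H projection

print(f"untied V x H embedding:        {bert_style:,} params")
print(f"factorized (V x E) + (E x H):  {albert_style:,} params")
print(f"reduction:                     {1 - albert_style / bert_style:.1%}")
```

Together with cross-layer parameter sharing, this is what lets the model keep memory usage and training time down while still being pre-trained on a large biomedical corpus.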


Information ◽  
2019 ◽  
Vol 10 (6) ◽  
pp. 186 ◽  
Author(s):  
Ajees A P ◽  
Manju K ◽  
Sumam Mary Idicula

Named Entity Recognition (NER) is the process of identifying the elementary units in a text document and classifying them into predefined categories such as person, location, organization and so forth. NER plays an important role in many Natural Language Processing applications like information retrieval, question answering, machine translation and so forth. Resolving the ambiguities of lexical items involved in a text document is a challenging task. NER in Indian languages is always a complex task due to their morphological richness and agglutinative nature. Even though different solutions have been proposed for NER, it is still an unsolved problem. Traditional approaches to Named Entity Recognition were based on the application of hand-crafted features to classical machine learning techniques such as Hidden Markov Model (HMM), Support Vector Machine (SVM), Conditional Random Field (CRF) and so forth. But the introduction of deep learning techniques to the NER problem changed the scenario, where state-of-the-art results have been achieved using deep learning architectures. In this paper, we address the problem of effective word representation for NER in Indian languages by capturing the syntactic, semantic and morphological information. We propose a deep learning based entity extraction system for Indian languages using a novel combined word representation, including character-level, word-level and affix-level embeddings. We have used ‘ARNEKT-IECSIL 2018’ shared data for training and testing. Our results highlight the improvement that we obtained over the existing pre-trained word representations.
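The combined word representation described above amounts to concatenating several embedding views of each token. The toy sketch below illustrates the shape of that combination only: the dimensions, the random lookup tables, and the averaging of character vectors are invented stand-ins for the learned embeddings (e.g. a character-level neural encoder) used in the paper.

```python
import random

random.seed(0)
WORD_DIM, CHAR_DIM, AFFIX_DIM = 8, 4, 3  # illustrative sizes

def embed(key, dim, table={}):
    """Lazily assign a fixed random vector per key (stand-in for a learned lookup)."""
    if key not in table:
        table[key] = [random.uniform(-1, 1) for _ in range(dim)]
    return table[key]

def char_summary(word):
    """Average the character embeddings (stand-in for a char CNN/LSTM encoder)."""
    vecs = [embed(("char", c), CHAR_DIM) for c in word]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(CHAR_DIM)]

def combined_representation(word):
    """Concatenate word-level, character-level, and affix-level views."""
    return (embed(("word", word), WORD_DIM)
            + char_summary(word)
            + embed(("prefix", word[:3]), AFFIX_DIM)
            + embed(("suffix", word[-3:]), AFFIX_DIM))

vec = combined_representation("Thiruvananthapuram")
print(len(vec))  # 8 + 4 + 3 + 3 = 18
```

For agglutinative languages, the affix slices are what carry much of the morphological signal, which is why the combined vector can outperform a plain pre-trained word embedding.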

