A Region-based Approach to the Automated Marking of Short Textual Answers

2011 ◽  
Vol 1 (1) ◽  
pp. 7 ◽  
Author(s):  
Raheel Siddiqi

Automated marking of short textual answers is a challenging task due to the difficulties involved in accurately “understanding” natural language text. However, purpose-built Natural Language Processing (NLP) techniques can be applied to this task. This paper describes an NLP-based approach to automated assessment that extends an earlier approach [1] to enable the automated marking of longer answers as well as answers that are partially correct. In the extended approach, the original Question Answer Language (QAL) is augmented to support the definition of regions of text that are expected to appear in a student’s answer. To explain the extensions to QAL, we present worked examples based on real exam questions. The system’s ability to accurately mark longer answer texts is shown to be on a par with that of existing state-of-the-art short-answer marking systems, which are not capable of marking such longer texts.
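
The abstract does not show QAL's actual notation, so the following is only a minimal sketch, assuming an invented region structure, of how region-based partial marking might work: each region lists phrases expected in the student's answer and carries its own mark weight.

```python
# Hypothetical illustration only: QAL's real syntax is not given in the
# abstract, so this sketch just mimics the idea of marking an answer by
# regions, each with its own expected phrases and mark weight.
import re

regions = [  # assumed structure, not the paper's QAL notation
    {"name": "definition", "patterns": [r"operating system", r"\bOS\b"], "marks": 1},
    {"name": "example",    "patterns": [r"linux|windows|macos"],          "marks": 1},
]

def mark_answer(answer: str) -> float:
    """Award the marks of every region whose expected content appears in the answer."""
    total = 0.0
    for region in regions:
        if any(re.search(p, answer, re.IGNORECASE) for p in region["patterns"]):
            total += region["marks"]
    return total

print(mark_answer("An operating system such as Linux manages hardware."))  # 2.0
```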

2012 ◽  
Vol 3 (1) ◽  
pp. 140-143
Author(s):  
Ekta Aggarwal ◽  
Shreeja Nair

Natural Language Processing (NLP) is an area of research and application that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. This paper deals with a database concept whereby data resources can be fetched and accessed with reduced time complexity. The retrieval techniques are based on the idea of binary search. A natural language interface refers to words in its own dictionary, as well as to words in a standard dictionary, in order to interpret a query. The main contribution of this investigation is to improve the accuracy of the query translation process by using the information provided by the database schema.
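
As a rough illustration of the retrieval idea (the paper's schema and dictionaries are not given, so every name below is an assumption), a query's words can be mapped to schema elements through a small lexicon and the matching record fetched from a sorted table by binary search:

```python
# A minimal sketch, not the authors' implementation: map query words to
# schema elements via a small dictionary and retrieve a record from a
# sorted list with binary search, echoing the retrieval idea described above.
import bisect

schema_lexicon = {"salary": "salary", "name": "emp_name", "employee": "employees"}  # assumed names

employees = [("alice", 50000), ("bob", 60000), ("carol", 55000)]
employees.sort(key=lambda r: r[0])          # binary search needs sorted keys
keys = [r[0] for r in employees]

def lookup(query: str):
    tokens = query.lower().split()
    columns = [schema_lexicon[t] for t in tokens if t in schema_lexicon]
    target = next((t for t in tokens if t in keys), None)
    if target is None:
        return columns, None
    i = bisect.bisect_left(keys, target)     # O(log n) retrieval
    return columns, employees[i]

print(lookup("show salary of employee bob"))  # (['salary', 'employees'], ('bob', 60000))
```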


2019 ◽  
Vol 8 (2) ◽  
pp. 5511-5514

Machine comprehension is a broad research area in the Natural Language Processing domain that deals with making a computerised system understand given natural language text. A question answering system is one such variant, used to find the correct ‘answer’ to a ‘query’ using the supplied ‘context’. Using a single sentence instead of the whole context paragraph to determine the ‘answer’ is useful in terms of both computation and accuracy. Sentence selection can therefore be considered a first step towards obtaining the answer. This work devises a sentence selection method that uses cosine similarity and the common word count between each sentence of the context and the question. This removes the extensive training overhead associated with other available approaches while still giving comparable results. The SQuAD dataset is used for accuracy-based performance comparison.
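
A minimal sketch of the described scoring follows, assuming an unweighted sum of the two signals (the exact combination is not specified in the abstract): each context sentence is ranked by its bag-of-words cosine similarity with the question plus the number of words it shares with the question.

```python
# Sketch of sentence selection: rank each context sentence by cosine
# similarity with the question (bag-of-words vectors) plus common word count.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def select_sentence(context: str, question: str) -> str:
    q = Counter(question.lower().split())
    best, best_score = "", -1.0
    for sent in context.split("."):
        s = Counter(sent.lower().split())
        score = cosine(s, q) + len(set(s) & set(q))   # similarity + common word count
        if score > best_score:
            best, best_score = sent.strip(), score
    return best

context = "SQuAD was released in 2016. It contains crowd-sourced questions on Wikipedia articles."
print(select_sentence(context, "When was SQuAD released?"))  # "SQuAD was released in 2016"
```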


Author(s):  
Vanitha Guda ◽  
SureshKumar Sanampudi

Due to the numerous information needs, retrieval of events from a given natural language text is inevitable. From a natural language processing (NLP) perspective, "events" are situations, occurrences, real-world entities or facts. Extracting events and arranging them on a timeline is helpful in various NLP applications such as summarising news articles, processing health records, and Question Answering (QA) systems. This paper presents a framework for identifying the events and times in a given document and representing them using a graph data structure. As a result, a graph is derived that shows the event-time relationships in the given text. Events form the nodes of the graph, and edges represent the temporal relations among the nodes. The time of an event occurrence exists in two forms, namely qualitative (like before, after, during, etc.) and quantitative (exact time points/periods). To build the event-time-event structure, quantitative time is normalized to qualitative form, and the resulting temporal information is used to label the edges between events. The dataset released in the shared task EvTExtract of the Forum for Information Retrieval Evaluation (FIRE) 2018 conference is used to evaluate the framework. Precision and recall are used as evaluation metrics to assess the performance of the proposed framework against other methods in the state of the art, achieving 85% accuracy and 90% precision.
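
As a hedged illustration of the graph representation only (event and time extraction are omitted, and the event names below are invented), already-extracted events with quantitative time anchors can be normalized into qualitative relation labels on the edges:

```python
# Illustrative sketch only: event/time extraction itself is not shown here.
# Given already-extracted events with exact dates, quantitative times are
# normalized to qualitative BEFORE labels on directed graph edges.
from datetime import date

events = {  # hypothetical extracted events with quantitative anchors
    "election_held": date(2018, 5, 12),
    "results_announced": date(2018, 5, 15),
}

def build_event_graph(evts):
    """Nodes are events; each directed edge carries a qualitative relation."""
    graph = {e: [] for e in evts}
    items = sorted(evts.items(), key=lambda kv: kv[1])
    for (e1, t1), (e2, t2) in zip(items, items[1:]):
        relation = "BEFORE" if t1 < t2 else "SIMULTANEOUS"
        graph[e1].append((e2, relation))
    return graph

print(build_event_graph(events))
# {'election_held': [('results_announced', 'BEFORE')], 'results_announced': []}
```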


2017 ◽  
Vol 58 (2) ◽  
pp. 1
Author(s):  
Waheeb Ahmed ◽  
Babu Anto

An automatic web-based Question Answering (QA) system is a valuable tool for improving e-learning and education. Several approaches employ natural language processing technology to understand questions given as natural language text, which is incomplete and error-prone. In addition, instead of extracting the exact answer, many approaches simply return hyperlinks to documents containing the answers, which is inconvenient for students or learners. In this paper we develop a technique to detect the type of a question, based on which the appropriate technique for extracting the answer is selected. The system returns only blocks or phrases of data containing the answer rather than full documents, thereby substantially improving the efficiency of web QA systems for e-learning.
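
The paper's actual detection rules are not given; the sketch below assumes a simple wh-word lookup purely to illustrate how a detected question type can drive the choice of answer-extraction strategy:

```python
# A hedged sketch of rule-based question-type detection: the wh-word decides
# which answer-extraction strategy applies, so only a phrase of the matching
# type is returned, not a whole document. The cue list is an assumption.
QUESTION_TYPES = {
    "who": "PERSON", "when": "DATE", "where": "LOCATION",
    "how many": "NUMBER", "what": "DEFINITION", "why": "REASON",
}

def detect_question_type(question: str) -> str:
    q = question.lower()
    for cue, qtype in QUESTION_TYPES.items():
        if q.startswith(cue) or f" {cue} " in q:
            return qtype
    return "OTHER"

print(detect_question_type("When was the World Wide Web invented?"))  # DATE
print(detect_question_type("Who proposed the PageRank algorithm?"))   # PERSON
```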


Author(s):  
P. Monisha ◽  
R. Rubanya ◽  
N. Malarvizhi

The overwhelming majority of existing approaches to opinion feature extraction rely on mining patterns from a single review corpus, ignoring the nontrivial disparities in the distributional characteristics of opinion features across different corpora. This research presents a technique to identify opinion features from online reviews by exploiting the difference in opinion feature statistics across two corpora: one domain-specific corpus (i.e., the given review corpus) and one domain-independent corpus (i.e., the contrasting corpus). This disparity is captured as domain relevance (DR), which characterizes the relevance of a term to a text collection. A list of candidate opinion features is first extracted from the domain review corpus by defining a set of syntactic dependency rules. For each extracted candidate feature, its intrinsic-domain relevance (IDR) and extrinsic-domain relevance (EDR) scores are estimated on the domain-dependent and domain-independent corpora, respectively. Natural language processing (NLP) refers to computer systems that analyze, attempt to understand, or produce one or more human languages, such as English, Japanese, Italian, or Russian, and that process the information contained in natural language text. The input might be text, spoken language, or keyboard input. The field of NLP is primarily concerned with getting computers to perform useful and interesting tasks with human languages, and secondarily with helping us come to a better understanding of human language.
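
As a simplified, hypothetical sketch of the domain relevance idea (the relevance measure and thresholds below are assumptions, not the authors' formulation), a candidate term is kept as an opinion feature when its relevance to the review corpus is high and its relevance to the contrasting corpus is low:

```python
# Simplified sketch: estimate a term's relevance to each corpus from its
# document frequency, then keep candidates relevant to the review (domain)
# corpus but not to the contrasting corpus. Thresholds are assumptions.
def domain_relevance(term, corpus):
    """Fraction of documents in the corpus that mention the term."""
    docs_with_term = sum(1 for doc in corpus if term in doc.lower())
    return docs_with_term / len(corpus)

domain_corpus = ["the battery life is great", "battery drains fast", "nice screen"]
contrast_corpus = ["the election results", "weather was mild", "stock prices rose"]

candidates = ["battery", "screen", "day"]
opinion_features = [
    t for t in candidates
    if domain_relevance(t, domain_corpus) >= 0.3      # high IDR
    and domain_relevance(t, contrast_corpus) <= 0.1   # low EDR
]
print(opinion_features)  # ['battery', 'screen']
```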


2021 ◽  
Vol 39 (3) ◽  
pp. 121-128
Author(s):  
Chulho Kim

Natural language processing (NLP) is a computerized approach to analyzing text that explores how computers can be used to understand and manipulate natural language text or speech to do useful things. In the healthcare field, NLP techniques are applied in a variety of settings, ranging from evaluating the adequacy of treatment and assessing the presence of acute illness to other forms of clinical decision support. After converting text into computer-readable data through a preprocessing step, NLP can extract valuable information using rule-based algorithms, machine learning, and neural networks. NLP can be used to distinguish subtypes of stroke or to accurately extract critical clinical information such as stroke severity and patient prognosis. If these NLP methods are actively utilized in the future, they will make the most of electronic health records and enable optimal medical judgment.
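
For illustration only (not a clinical tool, and not tied to any specific system described here), a tiny rule-based extractor can pull a stroke-severity score such as the NIHSS out of free-text notes after simple preprocessing:

```python
# Illustration only: a minimal rule-based extractor that pulls a stroke
# severity score (NIHSS) from a free-text note after basic preprocessing,
# mirroring the rule-based branch described above.
import re

def extract_nihss(note: str):
    """Return the NIHSS score mentioned in the note, if any."""
    text = " ".join(note.lower().split())          # basic preprocessing
    match = re.search(r"nihss(?: score)?(?: of)?[:\s]+(\d{1,2})", text)
    return int(match.group(1)) if match else None

print(extract_nihss("Admitted with acute ischemic stroke, NIHSS score of 14."))  # 14
```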


Author(s):  
Pamela Rogalski ◽  
Eric Mikulin ◽  
Deborah Tihanyi

In 2018, we overheard many CEEA-ACEG members stating that they have "found their people"; this led us to wonder what makes this evolving community unique. Using cultural historical activity theory to view the proceedings of CEEA-ACEG 2004-2018 in comparison with the geographically and intellectually adjacent ASEE, we used both machine-driven (Natural Language Processing, NLP) and human-driven (literature review of the proceedings) methods. Here, we hoped to build on surveys—most recently by Nelson and Brennan (2018)—to understand, beyond what members say about themselves, what makes the CEEA-ACEG community distinct, where it has come from, and where it is going. Engaging in the two methods of data collection quickly diverted our focus from an analysis of the data themselves to the characteristics of the data in terms of cultural historical activity theory. Our preliminary findings point to some unique characteristics of machine- and human-driven results, with the former, as might be expected, focusing on the micro-level (words and language patterns) and the latter on the macro-level (ideas and concepts). NLP generated data within the realms of "community" and "division of labour", while the review of proceedings centred on "subject" and "object"; both found "instruments", although NLP with greater granularity. With this new understanding of the relative strengths of each method, we have a revised framework for addressing our original question.


2019 ◽  
Vol 53 (2) ◽  
pp. 3-10
Author(s):  
Muthu Kumar Chandrasekaran ◽  
Philipp Mayr

The 4th joint BIRNDL workshop was held at the 42nd ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019) in Paris, France. BIRNDL 2019 intended to stimulate IR researchers and digital library professionals to elaborate on new approaches in natural language processing, information retrieval, scientometrics, and recommendation techniques that can advance the state-of-the-art in scholarly document understanding, analysis, and retrieval at scale. The workshop incorporated different paper sessions and the 5th edition of the CL-SciSumm Shared Task.


Author(s):  
Matheus C. Pavan ◽  
Vitor G. Santos ◽  
Alex G. J. Lan ◽  
Joao Martins ◽  
Wesley Ramos Santos ◽  
...  
