Question Difficulty Estimation Based on Attention Model for Question Answering

2021 ◽  
Vol 11 (24) ◽  
pp. 12023
Author(s):  
Hyun-Je Song ◽  
Su-Hwan Yoon ◽  
Seong-Bae Park

This paper addresses question difficulty estimation, the goal of which is to estimate the difficulty level of a given question in question-answering (QA) tasks. Since a question in these tasks is composed of a questionary sentence and a set of information components, such as a description and candidate answers, it is important to model the relationship among the information components to estimate the difficulty level of the question. However, existing approaches to this task model only simple relationships, such as the relationship between the questionary sentence and the description, and such simple relationships are insufficient to predict the difficulty level accurately. Therefore, this paper proposes an attention-based model that considers the complicated relationships among the information components. The proposed model first represents bi-directional relationships between the questionary sentence and each information component using a dual multi-head co-attention, since the questionary sentence is a key factor in QA questions and it both affects and is affected by the information components. Then, the proposed model captures inter-information relationships over the bi-directional representations through a self-attention model. These inter-information relationships help accurately predict the difficulty of questions that require reasoning over multiple kinds of information components. Experimental results on three well-known, real-world QA data sets show that the proposed model outperforms the previous state-of-the-art and pre-trained language model baselines. It is also shown that the proposed model is robust to an increase in the number of information components.
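
As a rough illustration of the dual co-attention idea described above (not the authors' exact architecture), the sketch below applies PyTorch's multi-head attention in both directions between a question encoding and one information-component encoding; all module names, dimensions, and the mean pooling are assumptions.

```python
import torch
import torch.nn as nn

class DualCoAttention(nn.Module):
    """Minimal sketch of bi-directional (dual) multi-head co-attention
    between a question and one information component (e.g., a description
    or a candidate answer). Dimensions and pooling are assumptions."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.q_to_c = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.c_to_q = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, question: torch.Tensor, component: torch.Tensor):
        # question: (batch, q_len, dim), component: (batch, c_len, dim)
        q_attended, _ = self.q_to_c(question, component, component)  # question attends to component
        c_attended, _ = self.c_to_q(component, question, question)   # component attends to question
        # Pool each direction into a single vector per pair (mean pooling here).
        return q_attended.mean(dim=1), c_attended.mean(dim=1)

# The per-component vectors could then be stacked and passed through a
# self-attention layer to model inter-component relationships.
q = torch.randn(2, 12, 256)   # toy question encoding
c = torch.randn(2, 30, 256)   # toy description encoding
q_vec, c_vec = DualCoAttention()(q, c)
print(q_vec.shape, c_vec.shape)  # torch.Size([2, 256]) torch.Size([2, 256])
```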

Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 533
Author(s):  
Qin Zhao ◽  
Chenguang Hou ◽  
Changjian Liu ◽  
Peng Zhang ◽  
Ruifeng Xu

Quantum-inspired language models have been introduced to information retrieval due to their transparency and interpretability. While exciting progress has been made, current studies mainly investigate the relationship between density matrices of different sentence subspaces of a semantic Hilbert space. The Hilbert space as a whole, which has a unique density matrix, remains largely unexplored. In this paper, we propose a novel Quantum Expectation Value based Language Model (QEV-LM). A unique shared density matrix is constructed for the semantic Hilbert space, and words and sentences are viewed as different observables in this quantum model. In this framework, the matching score describing the similarity between a question-answer pair is naturally interpreted as the quantum expectation value of a joint question-answer observable. In addition to this theoretical soundness, experimental results on the TREC-QA and WIKIQA datasets demonstrate the computational efficiency of our proposed model, with excellent performance and low time consumption.
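
The core quantity in such quantum-inspired models is the expectation value tr(ρO) of an observable O under a density matrix ρ. The NumPy sketch below illustrates this generic computation, not the paper's specific construction; the word vectors and the way the observable is built are toy assumptions.

```python
import numpy as np

def density_matrix(word_vectors):
    """Build a density matrix as a uniform mixture of rank-1 projectors
    onto normalized word vectors (a common quantum-inspired construction)."""
    projectors = []
    for v in word_vectors:
        v = v / np.linalg.norm(v)
        projectors.append(np.outer(v, v))
    return np.mean(projectors, axis=0)

def expectation(rho, observable):
    """Quantum expectation value <O> = tr(rho @ O)."""
    return float(np.trace(rho @ observable))

rng = np.random.default_rng(0)
corpus_vectors = rng.normal(size=(50, 8))   # toy semantic Hilbert space samples
rho = density_matrix(corpus_vectors)        # shared density matrix

qa_vectors = rng.normal(size=(6, 8))        # toy joint question-answer representation
observable = density_matrix(qa_vectors)     # QA pair treated as an observable
print("matching score:", expectation(rho, observable))
```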


2022 ◽  
Vol 11 (2) ◽  
pp. 0-0

In recent times, transfer learning models have been known to exhibit good results in text classification for question answering, summarization, and next-word prediction, but these models have not yet been extensively used for the problem of hate speech detection. We anticipate that these networks may give better results in another text classification task, i.e., hate speech detection. This paper introduces a novel method of hate speech detection based on the concept of attention networks, using the BERT attention model. We have conducted exhaustive experiments and evaluation over publicly available datasets using various evaluation metrics (precision, recall, and F1 score). We show that our model outperforms all the state-of-the-art methods by almost 4%. We also discuss in detail the technical challenges faced during the implementation of the proposed model.
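
A minimal sketch of fine-tuning a pre-trained BERT classifier for binary hate speech detection with the Hugging Face transformers library; the example texts, labels, and the two-class setup are placeholders and not the paper's exact configuration.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

# Pre-trained BERT encoder with a 2-class head (0 = not hate, 1 = hate);
# the label scheme here is an assumption for illustration.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

texts = ["example post one", "example post two"]   # toy inputs
labels = torch.tensor([0, 1])

batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
outputs = model(**batch, labels=labels)

# outputs.loss can be back-propagated in a standard fine-tuning loop;
# outputs.logits give per-class scores used for precision/recall/F1.
print(outputs.loss.item(), outputs.logits.shape)
```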


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier-transform-inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing a 1D input signal into 2D patterns, motivated by the Fourier transform. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency in the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model is therefore a combination of LSTM and CNN. We evaluate the model on two data sets. For the first data set, which is more standardized than the other, our model outperforms previous works or is at least comparable. For the second data set, we devise schemes to generate training and testing data by changing the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% in some cases. We also analyze the effect of these parameters on the performance.
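
A rough sketch of the LSTM-then-CNN pipeline described above: an LSTM encodes the 1D sensor sequence, the encoded sequence is arranged as a 2D map, and a small CNN classifies it. Layer sizes, the reshaping scheme, and the number of classes are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class LSTMCNNClassifier(nn.Module):
    """Sketch: an LSTM encodes a 1D signal, the hidden states form a 2D
    'fingerprint', and a CNN classifies that 2D pattern."""

    def __init__(self, hidden: int = 64, n_classes: int = 6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)), nn.Flatten(),
            nn.Linear(16 * 8 * 8, n_classes),
        )

    def forward(self, signal: torch.Tensor):
        # signal: (batch, time, 1) -> encoded: (batch, time, hidden)
        encoded, _ = self.lstm(signal)
        # Treat the (time x hidden) matrix as a single-channel 2D image.
        image = encoded.unsqueeze(1)          # (batch, 1, time, hidden)
        return self.cnn(image)

x = torch.randn(4, 128, 1)                    # toy sensor window
print(LSTMCNNClassifier()(x).shape)           # torch.Size([4, 6])
```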


Author(s):  
Seema Rani ◽  
Avadhesh Kumar ◽  
Naresh Kumar

Background: Duplicate content often corrupts the filtering mechanism in online question answering. Moreover, as users are usually more comfortable posing questions in their native language, transliteration adds to the challenges of detecting duplicate questions. This degrades the response time and increases the answer overload. It has thus become crucial to build clever, intelligent, and semantic filters that can match linguistically disparate questions by meaning. Objective: Most of the research on duplicate question detection has been done on mono-lingual, mostly English, Q&A platforms. The aim is to build a model that extends the cognitive capabilities of machines to interpret, comprehend, and learn features for semantic matching in transliterated bi-lingual Hinglish (Hindi + English) data acquired from different Q&A platforms. Method: In the proposed DQDHinglish (Duplicate Question Detection) model, language transformation (transliteration and translation) is first performed to convert the bi-lingual transliterated question into mono-lingual English-only text. Next, a hybrid of a Siamese neural network containing two identical Long Short-Term Memory (LSTM) models and a multi-layer perceptron network is proposed to detect semantically similar question pairs, with the Manhattan distance function used as the similarity measure. Result: A dataset was prepared by scraping 100 question pairs from various social media platforms, such as Quora and TripAdvisor. The performance of the proposed model was evaluated on the basis of accuracy and F-score; the proposed DQDHinglish achieves a validation accuracy of 82.40%. Conclusion: A deep neural model was introduced to find a semantic match between an English question and a Hinglish (Hindi + English) question, so that questions with similar intent can be combined to enable fast and efficient information processing and delivery. A dataset was created and the proposed model was evaluated on the basis of accuracy. To the best of our knowledge, this work is the first reported study on transliterated Hinglish semantic question matching.
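
A minimal sketch of the Manhattan-distance similarity commonly used in Siamese LSTM models of this kind: the same LSTM encodes both questions and similarity is exp(-||h1 - h2||_1). The vocabulary size, embedding dimensions, and omission of the downstream MLP are assumptions for illustration.

```python
import torch
import torch.nn as nn

class SiameseLSTM(nn.Module):
    """Sketch of a Siamese LSTM: one shared encoder, Manhattan-distance
    similarity exp(-||h1 - h2||_1) between the two question encodings."""

    def __init__(self, vocab_size: int = 10000, embed: int = 100, hidden: int = 50):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed)
        self.lstm = nn.LSTM(embed, hidden, batch_first=True)

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, (h, _) = self.lstm(self.embedding(token_ids))
        return h[-1]                                   # final hidden state

    def forward(self, q1: torch.Tensor, q2: torch.Tensor) -> torch.Tensor:
        h1, h2 = self.encode(q1), self.encode(q2)
        return torch.exp(-torch.sum(torch.abs(h1 - h2), dim=1))  # score in (0, 1]

q1 = torch.randint(0, 10000, (2, 12))   # toy token ids for two question pairs
q2 = torch.randint(0, 10000, (2, 12))
print(SiameseLSTM()(q1, q2))            # similarity score per pair
```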


2020 ◽  
Vol 70 (4) ◽  
pp. 953-978
Author(s):  
Mustafa Ç. Korkmaz ◽  
G. G. Hamedani

This paper proposes a new extended Lindley distribution, which has more flexible density and hazard rate shapes than the Lindley and Power Lindley distributions, based on a mixture distribution structure, in order to model real data phenomena with new distributional characteristics. Some of its distributional properties, such as the shapes, moments, quantile function, Bonferroni and Lorenz curves, mean deviations, and order statistics, have been obtained. Characterizations based on two truncated moments, on conditional expectation, and in terms of the hazard function are presented. Different estimation procedures have been employed to estimate the unknown parameters, and their performances are compared via Monte Carlo simulations. The flexibility and importance of the proposed model are illustrated with two real data sets.
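
For context, the baseline Lindley(θ) distribution that such extensions build on has density f(x; θ) = θ²/(1 + θ) · (1 + x)e^(−θx) for x > 0. The sketch below evaluates this baseline density and fits θ by maximum likelihood; it illustrates only the base Lindley model on synthetic data, not the paper's extended distribution or its estimation procedures.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def lindley_pdf(x, theta):
    """Density of the one-parameter Lindley distribution."""
    return (theta ** 2 / (1.0 + theta)) * (1.0 + x) * np.exp(-theta * x)

def neg_loglik(theta, data):
    return -np.sum(np.log(lindley_pdf(data, theta)))

# Toy positive-valued data; in practice this would be a real data set.
rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=200)

fit = minimize_scalar(neg_loglik, bounds=(1e-6, 50.0),
                      args=(data,), method="bounded")
print("MLE of theta:", fit.x)
```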


2021 ◽  
Vol 15 (6) ◽  
pp. 1-22
Author(s):  
Yashen Wang ◽  
Huanhuan Zhang ◽  
Zhirun Liu ◽  
Qiang Zhou

For guiding natural language generation, many semantic-driven methods have been proposed. While clearly improving the performance of end-to-end training tasks, these existing semantic-driven methods still have clear limitations: (i) they only utilize shallow semantic signals (e.g., from topic models) with only a single stochastic hidden layer in their data generation process, so they suffer easily from noise (especially on short texts) and lack interpretability; and (ii) they ignore sentence order and document context, as they treat each document as a bag of sentences, and thus fail to capture the long-distance dependencies and global semantic meaning of a document. To overcome these problems, we propose a novel semantic-driven language modeling framework that learns a Hierarchical Language Model and a Recurrent Conceptualization-enhanced Gamma Belief Network simultaneously. For scalable inference, we develop auto-encoding Variational Recurrent Inference, allowing efficient end-to-end training and simultaneously capturing global semantics from a text corpus. In particular, this article introduces concept information derived from the high-quality lexical knowledge graph Probase, which provides strong interpretability and anti-noise capability for the proposed model. Moreover, the proposed model captures not only intra-sentence word dependencies, but also temporal transitions between sentences and inter-sentence concept dependencies. Experiments conducted on several NLP tasks validate the superiority of the proposed approach, which can effectively infer meaningful hierarchical concept structures of documents and hierarchical multi-scale structures of sequences, even compared with the latest state-of-the-art Transformer-based models.
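
As background, the gamma belief network family that the proposed recurrent model builds on stacks gamma-distributed hidden layers over Poisson-distributed word counts. The sketch below shows a generic two-layer generative pass of that family; the layer sizes, priors, and factor matrices are toy assumptions, and the recurrent and concept-enhanced components of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer (Poisson-)Gamma Belief Network generative sketch.
V, K1, K2 = 1000, 50, 10                         # vocab size, layer-1 and layer-2 widths

Phi1 = rng.dirichlet(np.ones(V), size=K1).T      # V x K1 topic-word factor matrix
Phi2 = rng.dirichlet(np.ones(K1), size=K2).T     # K1 x K2 topic-topic factor matrix

theta2 = rng.gamma(shape=0.5, scale=1.0, size=K2)      # top-layer gamma units
theta1 = rng.gamma(shape=Phi2 @ theta2, scale=1.0)     # layer-1 units shaped by the layer above
doc_word_counts = rng.poisson(Phi1 @ theta1)           # observed bag-of-words counts

print(doc_word_counts.shape, int(doc_word_counts.sum()))
```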


2014 ◽  
Vol 2014 ◽  
pp. 1-7 ◽  
Author(s):  
Dinesh Verma ◽  
Shishir Kumar

Nowadays, software developers face challenges in minimizing the number of defects during software development. Using the defect density parameter, developers can identify possibilities for improvement in the product. Since the total number of defects depends on module size, there is a need to calculate the optimal module size that minimizes the defect density. In this paper, an improved model has been formulated that captures the relationship between defect density and variable module size. This relationship can be used to optimize the overall defect density through an effective distribution of module sizes. Three available data sets related to this aspect have been examined with the proposed model, taking distinct values of the variables and parameters under some constraints on the parameters. A curve-fitting method has been used to obtain the module size with minimum defect density, and goodness-of-fit measures have been computed to validate the proposed model on the data sets. The defect density can be optimized by an effective distribution of module sizes: larger modules can be broken into smaller modules, and smaller modules can be merged, to minimize the overall defect density.
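
A hedged sketch of the general idea: fit a defect-density-versus-module-size curve and locate its minimum. The U-shaped functional form below (a decreasing per-module term plus a growing complexity term) and the data points are illustrative assumptions, not the paper's exact model or data.

```python
import numpy as np
from scipy.optimize import curve_fit, minimize_scalar

def defect_density(size, a, b, c):
    """Assumed U-shaped model: a/size reflects fixed per-module overheads,
    b*size reflects growing complexity, c is a baseline."""
    return a / size + b * size + c

# Synthetic placeholder data (module size in LOC, defects per KLOC).
sizes = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
densities = np.array([9.0, 5.5, 3.8, 3.5, 4.6, 7.2])

params, _ = curve_fit(defect_density, sizes, densities, p0=(100.0, 0.001, 1.0))
opt = minimize_scalar(lambda s: defect_density(s, *params),
                      bounds=(10, 2000), method="bounded")
print("fitted params:", params)
print("module size with minimum predicted defect density:", opt.x)
```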


2017 ◽  
Vol 2017 ◽  
pp. 1-10 ◽  
Author(s):  
Wen-Jun Li ◽  
Qiang Dong ◽  
Yan Fu

With the rapid development of the mobile Internet and smart devices, more and more online content providers collect the preferences of their customers through various apps on mobile devices. These preferences are largely reflected in the ratings given to online items with explicit scores. Both positive and negative ratings help recommender systems provide relevant items to a target user. Based on an empirical analysis of three real-world movie-rating data sets, we observe that users' rating criteria change over time, and that past positive and negative ratings have different influences on users' future preferences. Given this, we propose a recommendation model on a session-based temporal graph that considers the difference between long- and short-term preferences and the different temporal effects of positive and negative ratings. Extensive experimental results validate the significant accuracy improvement of our proposed model compared with state-of-the-art methods.
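
A minimal sketch of the temporal weighting idea only: past ratings contribute to a user's preference with exponential decay over time, with separate (assumed) decay rates for positive and negative ratings. This illustrates the general mechanism of differing temporal effects, not the paper's session-based graph model; all thresholds and half-lives are placeholders.

```python
import math
from dataclasses import dataclass

@dataclass
class Rating:
    item: str
    score: float       # e.g., 1..5 stars
    days_ago: float    # time elapsed since the rating was given

def preference_weight(r: Rating, pos_half_life: float = 60.0,
                      neg_half_life: float = 20.0) -> float:
    """Exponentially decayed contribution of one rating; positive and
    negative ratings decay at different (assumed) rates."""
    positive = r.score >= 3.0
    half_life = pos_half_life if positive else neg_half_life
    decay = math.exp(-math.log(2) * r.days_ago / half_life)
    return decay if positive else -decay

history = [Rating("item_a", 5.0, 3), Rating("item_b", 1.0, 3), Rating("item_b", 1.0, 90)]
for r in history:
    print(r.item, round(preference_weight(r), 3))
```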


Author(s):  
Hassan Gharayagh Zandi ◽  
Sahar Zarei ◽  
Mohammad Ali Besharat ◽  
Davoud Houminiyan sharif abadi ◽  
Ahmad Bagher Zadeh

Coaching has often been viewed as a context within which coaches operate largely to bring about changes in athletes' performance and flourishing. One key factor in successful coaching outcomes is the quality of the relationship between coaches and athletes. The coach–athlete relationship is at the heart of coaching; however, few studies have been conducted on its antecedents. The aim of this study was to investigate the relationship between coaches' forgiveness and their perceived relationship quality toward their athletes, examining the mediating role of coaches' interpersonal behaviors. A total of 270 Iranian coaches participated in the survey, and the data were analyzed using structural equation modeling. Results revealed that forgiveness positively predicted the coaches' perceived relationship quality with their athletes, and that this pathway was mediated by the coaches' interpersonal behaviors.


2020 ◽  
Vol 12 (11) ◽  
pp. 1746
Author(s):  
Salman Ahmadi ◽  
Saeid Homayouni

In this paper, we propose a novel approach based on the active contours model for change detection from synthetic aperture radar (SAR) images. To increase the accuracy of the proposed approach, a new operator is introduced to generate a difference image from the before- and after-change images. A new active contour model is then developed to accurately detect changed regions in the difference image. The proposed model extracts the changed areas as a target feature from the difference image based on training data from changed and unchanged regions. In this research, we used the Otsu histogram thresholding method to produce the training data automatically; in addition, the training data are updated while minimizing the energy function of the model. To evaluate the accuracy of the model, we applied the proposed method to three benchmark SAR data sets. The proposed model obtains Kappa coefficients of 84.65%, 87.07%, and 96.26% on the Yellow River Estuary, Bern, and Ottawa data sets, respectively. These results demonstrate the effectiveness of the proposed approach compared with other methods. Another advantage of the proposed model is its high speed compared with conventional methods.
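
A small sketch of how a difference image between two SAR acquisitions can be thresholded with Otsu's method to produce changed/unchanged training data, as described above. The image arrays are synthetic placeholders, and the operator used here is the standard log-ratio rather than the paper's new operator.

```python
import numpy as np
from skimage.filters import threshold_otsu

rng = np.random.default_rng(0)

# Synthetic placeholder SAR intensity images (before/after a change event).
before = rng.gamma(shape=2.0, scale=1.0, size=(128, 128))
after = before.copy()
after[40:80, 40:80] *= 4.0                 # simulated changed region

# Standard log-ratio difference operator (stand-in for the paper's operator).
diff = np.abs(np.log(after + 1e-6) - np.log(before + 1e-6))

# Otsu thresholding splits the difference image into changed / unchanged
# pixels, which can serve as automatic training data for the contour model.
t = threshold_otsu(diff)
changed_mask = diff > t
print("threshold:", round(float(t), 3), "changed pixels:", int(changed_mask.sum()))
```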

