answer quality
Recently Published Documents


TOTAL DOCUMENTS: 37 (five years: 7)

H-INDEX: 10 (five years: 1)

2021 · Vol 11 (16) · pp. 7681
Author(s): Kyoungsoo Bok, Heesub Song, Dojin Choi, Jongtae Lim, Deukbae Park, ...

In this paper, we propose a method for recommending experts who can appropriately answer questions, based on an analysis of social activities on social media. By analyzing the various social activities performed on social media, each user's interests are identified. A user's influence is then determined through an analysis of the relationships among users who share an interest field, combined with the user's response speed and answer quality. An expert group is matched by analyzing the content of a user's query against a hierarchical structure of words. For a given question, the accuracy of the expert recommendation is improved by incorporating both the question content and the sublevel words from the word hierarchy. Evaluations demonstrate that the proposed method outperforms existing methods.
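The influence computation described in the abstract can be sketched as a weighted combination of answer quality and response speed. The field names, weights and decay form below are illustrative assumptions, not the authors' published formulation:

```python
# Hypothetical sketch: a user's influence in an interest field combines
# average answer quality with response speed. Weights (0.7/0.3) and the
# speed decay are assumptions for illustration only.

def influence(avg_answer_quality, avg_response_hours, w_quality=0.7, w_speed=0.3):
    """Combine answer quality (scaled 0-1) and response speed into one score."""
    # Faster responders score higher; the score decays with response time.
    speed_score = 1.0 / (1.0 + avg_response_hours)
    return w_quality * avg_answer_quality + w_speed * speed_score

def rank_experts(candidates):
    """Rank candidate experts in an interest field by descending influence."""
    return sorted(
        candidates,
        key=lambda u: influence(u["quality"], u["response_hours"]),
        reverse=True,
    )

experts = rank_experts([
    {"name": "a", "quality": 0.9, "response_hours": 2.0},
    {"name": "b", "quality": 0.6, "response_hours": 0.5},
])
```

A top-k slice of this ranking would then form the expert group matched against the question's word hierarchy.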


2020 · Vol 38 (5/6) · pp. 1013-1033
Author(s): Hei Chia Wang, Yu Hung Chiang, Si Ting Lin

Purpose In community question and answer (CQA) services, because of user subjectivity and the limits of knowledge, the distribution of answer quality can vary drastically, from highly related to irrelevant or even spam answers. Previous studies of CQA portals have faced two important issues: answer quality analysis and spam answer filtering. Therefore, the purposes of this study are to filter spam answers in advance using a two-phase identification method, then automatically classify the different types of question and answer (QA) pairs by deep learning. Finally, this study proposes a comprehensive study of answer quality prediction for the different types of QA pairs. Design/methodology/approach This study proposes an integrated model with a two-phase identification method that filters spam answers in advance and uses a deep learning method [recurrent convolutional neural network (R-CNN)] to automatically classify the various types of questions. Logistic regression (LR) is further applied to examine which answer quality features significantly indicate high-quality answers for different types of questions. Findings There are four prominent findings. (1) This study confirms that conducting spam filtering before answer quality analysis reduces the proportion of high-quality answers misjudged as spam. (2) The experimental results show that answer quality prediction improves when question types are included. (3) The comparison of classifiers shows that the R-CNN achieves the best macro-F1 score (74.8%) in the question type classification module. (4) Finally, the LR results show that author ranking, answer length and common words significantly impact answer quality for different types of questions. Originality/value The proposed system is simultaneously able to detect spam answers and provide users with quick and efficient retrieval mechanisms for high-quality answers to different types of questions in CQA.
Moreover, this study further validates that crucial features exist among the different types of questions that can impact answer quality. Overall, the identification system automatically summarises high-quality answers for each type of question from the pool of messy answers in CQA, which can be very useful in helping users make decisions.
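The final LR step can be sketched as a logistic model over the three features the study found significant (author ranking, answer length, word overlap with the question). The weights and bias below are invented for demonstration; the paper fits them per question type:

```python
import math

# Illustrative logistic-regression scorer over the three significant
# features named in the abstract. Weights and bias are assumptions;
# in the study they would be learned separately per question type.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def answer_quality_prob(author_rank, answer_len, common_words,
                        weights=(1.2, 0.01, 0.5), bias=-2.0):
    """Estimated probability that an answer is high quality."""
    z = (bias
         + weights[0] * author_rank    # author ranking
         + weights[1] * answer_len     # answer length in words
         + weights[2] * common_words)  # words shared with the question
    return sigmoid(z)
```

With fitted per-type weights, ranking candidate answers by this probability yields the high-quality answer retrieval the system provides.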


2020 · Vol 72 (6) · pp. 887-907
Author(s): Lei Li, Chengzhi Zhang, Daqing He

Purpose With the growth in popularity of academic social networking sites, evaluating the quality of the academic information they contain has become increasingly important. Users' evaluations of this are based on predefined criteria, with external factors affecting how important these are seen to be. As few studies on these influences exist, this research explores the factors affecting the importance of criteria used for judging high-quality answers on academic social Q&A sites. Design/methodology/approach Scholars who had recommended answers on ResearchGate Q&A were asked to complete a questionnaire survey to rate the importance of various criteria for evaluating the quality of these answers. Statistical analysis methods were used to analyze the data from 215 questionnaires to establish the influence of scholars' demographic characteristics, the question types, the discipline and the combination of these factors on the importance of each evaluation criterion. Findings Particular disciplines and academic positions had a significant impact on the importance ratings of the criteria of relevance, completeness and credibility. Also, some combinations of factors had a significant impact: for example, older scholars tended to view verifiability as more important to the quality of answers to information-seeking questions than to discussion-seeking questions within the LIS and Art disciplines. Originality/value This research can help academic social Q&A platforms recommend high-quality answers based on different influencing factors, in order to meet the needs of scholars more effectively.
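The core of the questionnaire analysis, comparing mean importance ratings of a criterion across respondent groups, can be sketched as follows. The group names and ratings are made up for illustration:

```python
from statistics import mean

# Minimal sketch of the group comparison described above: mean importance
# ratings (1-5 scale) of one evaluation criterion, grouped by discipline.
# Data values are invented; the study analysed 215 real questionnaires.

ratings = {
    "LIS": [5, 4, 5, 4],
    "Art": [3, 4, 3, 3],
}

group_means = {group: mean(scores) for group, scores in ratings.items()}
```

The study's significance claims would additionally require hypothesis tests (e.g. ANOVA) on such group means, not just the descriptive comparison shown here.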


Community question answering CQA) systems are rapidly gaining attention in the society. Several researchers have actively engaged in improving the theories associated with question answering (QA) systems. This paper reviews the literature reported works on question answering QA systems. In this paper, we discuss on the early contributions on QA systems along with their present and future scope. We have categorized the literature reported works into 20 subgroups according to their significance and relevance. The works in each group will be brought out along with their inter-relevance. Finding the question and answer quality is the prime challenge almost addressed by many researchers. Modeling similar questions, identifying experts in prior and understanding seeker satisfaction also considered as potential challenges. Researchers at the most have done experimentations on popular CQAs like Yahoo! Answers, Wiki Answers, Baidu Knows, Brianly, Quora, Pubmed and Stack Overflow respectively. Machine learning, probabilistic modeling, deep learning and hybrid approach of solving show profound significance in addressing various challenges encounter with QA systems. Today the paradigm of CQA systems took the shift by serving as Open Educational Resources to learning community


2020 · Vol 66 (1) · pp. 179-193
Author(s): Weifeng Ma, Jiao Lou, Caoting Ji, Laibin Ma

2018 · Vol 70 (3) · pp. 269-287
Author(s): Lei Li, Daqing He, Chengzhi Zhang, Li Geng, Ke Zhang

Purpose Academic social question and answer (Q&A) sites are now utilised by millions of scholars and researchers for seeking and sharing discipline-specific information. However, little is known about the factors that can affect their votes on the quality of an answer, nor how the discipline might influence these factors. The paper aims to discuss this issue. Design/methodology/approach Using 1,021 answers collected over three disciplines (library and information services, history of art, and astrophysics) in ResearchGate, statistical analysis is performed to identify the characteristics of high-quality academic answers, and comparisons are made across the three disciplines. In particular, two major categories of characteristics, those of the answer provider and those of the answer content, were extracted and examined. Findings The results reveal that high-quality answers on academic social Q&A sites tend to possess two characteristics: first, they are provided by scholars with higher academic reputations (e.g. more followers); and second, they provide objective information (e.g. longer answers with fewer subjective opinions). However, the impact of these factors varies across disciplines; for example, objectivity is more favoured in astrophysics than in the other disciplines. Originality/value The study is envisioned to help academic Q&A sites select and recommend high-quality answers across different disciplines, especially in a cold-start scenario where an answer has not yet received enough judgements from peers.
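The two characteristic categories the study examines, provider reputation and answer objectivity, can be sketched as a simple feature extractor. The subjective-word list, field names and thresholds below are assumptions for illustration, not the study's actual operationalisation:

```python
# Hypothetical feature extractor for the two answer characteristics named
# above: provider reputation and content objectivity. The subjective-word
# lexicon here is a tiny stand-in; a real study would use a full lexicon.

SUBJECTIVE_WORDS = {"think", "believe", "feel", "probably", "maybe"}

def answer_features(answer_text, provider_followers):
    """Extract provider and content features for one answer."""
    words = answer_text.lower().split()
    subjective = sum(w.strip(".,!?") in SUBJECTIVE_WORDS for w in words)
    return {
        "reputation": provider_followers,                  # provider characteristic
        "length": len(words),                              # longer answers favoured
        "subjectivity": subjective / max(len(words), 1),   # lower is more objective
    }
```

In a cold-start setting, such features could rank fresh answers before any peer votes arrive, which is the use case the abstract highlights.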

