Automatic evaluation of online learning interaction content using domain concepts
Purpose
Interaction content is created during online learning interaction as information is exchanged to convey experience and share knowledge. Prior studies have mainly focused on the quantity of online learning interaction content (OLIC) from the perspective of types or frequency, resulting in a limited analysis of the quality of OLIC. Domain concepts, the highest form of interaction, are entities or things that are particularly relevant to the educational domain of an online course. The purpose of this paper is to explore a new method to evaluate the quality of OLIC using domain concepts.

Design/methodology/approach
This paper proposes a novel approach to automatically evaluate the quality of OLIC with regard to relevance, completeness and usefulness. A sample OLIC corpus is classified and evaluated based on domain concepts and textual features.

Findings
Experimental results show that random forest classifiers not only outperform logistic regression and support vector machines but also improve further when the quality dimensions of relevance and completeness are taken into account. In addition, domain concepts contribute to improving the performance of evaluating OLIC.

Research limitations/implications
This paper adopts a limited sample to train the classification models. The proposed approach has great benefits in monitoring students' knowledge performance, supporting teachers' decision-making and even enhancing the efficiency of school management.

Originality/value
This study extends the research on domain concepts in quality evaluation, especially in the online learning domain. It also has great potential for other domains.
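The classifier comparison described in the Findings could be set up along the following lines. This is only an illustrative sketch, not the paper's actual pipeline: the toy posts, quality labels and TF-IDF textual features are invented for demonstration, and the real study additionally uses domain-concept features.

```python
# Sketch: comparing random forest, logistic regression and SVM
# classifiers for rating interaction content quality.
# All data below is hypothetical; TF-IDF stands in for the
# textual (and domain-concept) features used in the paper.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy interaction posts with hypothetical quality labels
# (1 = relevant/useful, 0 = not).
posts = [
    "The gradient descent update rule minimizes the loss function",
    "thanks, see you tomorrow",
    "Backpropagation computes gradients layer by layer",
    "lol ok",
]
labels = [1, 0, 1, 0]

models = {
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": LinearSVC(),
}

# Fit each model on the same features and predict on a new post.
for name, model in models.items():
    clf = make_pipeline(TfidfVectorizer(), model)
    clf.fit(posts, labels)
    pred = clf.predict(["The chain rule underlies backpropagation"])
    print(name, int(pred[0]))
```

In practice each model would be scored with cross-validation on a labelled OLIC corpus, once per quality dimension (relevance, completeness, usefulness), so that the effect of adding domain-concept features can be measured per dimension.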