An Improved Deep Model for Knowledge Tracing and Question-Difficulty Discovery

2021 ◽  
pp. 362-375
Author(s):  
Huan Dai ◽  
Yupei Zhang ◽  
Yue Yun ◽  
Xuequn Shang
Author(s):  
Yunfei Liu ◽  
Yang Yang ◽  
Xianyu Chen ◽  
Jian Shen ◽  
Haifeng Zhang ◽  
...  

Knowledge tracing (KT) is the task of predicting whether students can answer questions correctly based on their historical responses. Although much research has been devoted to exploiting question information, the rich auxiliary information relating questions and skills has not been well extracted, which limits the performance of previous work. In this paper, we demonstrate that large gains on KT can be realized by pre-training an embedding for each question on abundant side information, followed by training deep KT models on the obtained embeddings. Specifically, the side information includes question difficulty and three kinds of relations contained in a bipartite graph between questions and skills. To pre-train the question embeddings, we propose using product-based neural networks to recover the side information. As a result, adopting the pre-trained embeddings in existing deep KT models significantly outperforms state-of-the-art baselines on three common KT datasets.
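
The abstract describes a two-stage pipeline: first pre-train an embedding per question so that the side information (question difficulty and the question-skill bipartite relations) can be recovered from it, then train a deep KT model on top of the frozen embeddings. The sketch below is a simplified illustration of that pipeline in PyTorch; the plain linear heads (the paper proposes product-based neural networks for this step), the LSTM-based KT model, the layer sizes, and all names are assumptions rather than the authors' implementation. Pre-training would minimize, for example, a regression loss on difficulty plus a binary cross-entropy loss on question-skill links before the embedding table is handed to the KT model.

```python
# Minimal sketch, assuming a simplified version of the two-stage idea:
# (1) pre-train per-question embeddings to recover side information,
# (2) freeze them and feed them to a deep KT model.
import torch
import torch.nn as nn

class QuestionPretrainer(nn.Module):
    """Pre-trains question embeddings by recovering side information.
    The paper uses product-based neural networks; linear heads are an
    illustrative simplification."""
    def __init__(self, num_questions, num_skills, dim=64):
        super().__init__()
        self.q_emb = nn.Embedding(num_questions, dim)   # embeddings to be pre-trained
        self.difficulty_head = nn.Linear(dim, 1)        # recover question difficulty
        self.skill_head = nn.Linear(dim, num_skills)    # recover question-skill links

    def forward(self, q_ids):
        e = self.q_emb(q_ids)
        return self.difficulty_head(e).squeeze(-1), self.skill_head(e)

class DeepKT(nn.Module):
    """An LSTM-based KT model consuming the pre-trained question embeddings."""
    def __init__(self, pretrained_emb: nn.Embedding, hidden=128):
        super().__init__()
        self.q_emb = pretrained_emb
        self.q_emb.weight.requires_grad = False          # keep pre-trained embeddings fixed
        self.rnn = nn.LSTM(pretrained_emb.embedding_dim + 1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, q_seq, correct_seq):
        # Concatenate each question embedding with the student's correctness bit.
        x = torch.cat([self.q_emb(q_seq), correct_seq.unsqueeze(-1).float()], dim=-1)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h)).squeeze(-1)    # P(correct) at each step
```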


Author(s):  
Jie Zhang ◽  
Dongdong Chen ◽  
Jing Liao ◽  
Weiming Zhang ◽  
Huamin Feng ◽  
...  

2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from those other application areas. A common form of IR involves ranking documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and should avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve efficiently from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions in a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of the query and the document. To retrieve efficiently from large collections, we develop a framework that incorporates query term independence [Mitra et al., 2019] into any arbitrary deep model, enabling large-scale precomputation and the use of an inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
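
One of the contributions above, the query term independence framework, lends itself to a small illustration: if the document score decomposes into a sum of independently computed per-term scores, every (term, document) score can be precomputed offline by a deep model and stored in an inverted-index-like structure, leaving only cheap lookups and additions at query time. The toy scores, names, and data layout below are illustrative assumptions, not the thesis's actual implementation.

```python
# Sketch of query-term-independent ranking: score(q, d) = sum over terms t in q
# of score(t, d), so each (term, document) score can be precomputed offline and
# served from an inverted-index-like posting list. Toy values for illustration.
from collections import defaultdict

# Offline: precomputed posting lists, term -> {doc_id: score}. In practice each
# score would come from a neural model evaluated once per (term, document) pair.
postings = {
    "neural":  {"d1": 0.8, "d2": 0.1},
    "ranking": {"d1": 0.5, "d3": 0.7},
}

def score_query(query_terms, postings):
    """Aggregate per-term scores under the query term independence assumption."""
    doc_scores = defaultdict(float)
    for term in query_terms:
        for doc_id, s in postings.get(term, {}).items():
            doc_scores[doc_id] += s          # additive decomposition over query terms
    return sorted(doc_scores.items(), key=lambda kv: kv[1], reverse=True)

print(score_query(["neural", "ranking"], postings))   # d1 ranks first
```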


Author(s):  
Denghui Zhang ◽  
Yanchi Liu ◽  
Wei Cheng ◽  
Bo Zong ◽  
Jingchao Ni ◽  
...  

Author(s):  
Liang Zhang ◽  
Xiaolu Xiong ◽  
Siyuan Zhao ◽  
Anthony Botelho ◽  
Neil T. Heffernan

Author(s):  
Jinjin Zhao ◽  
Shreyansh Bhatt ◽  
Candace Thille ◽  
Neelesh Gattani ◽  
Dawn Zimmaro

Author(s):  
Chenyang Wang ◽  
Weizhi Ma ◽  
Min Zhang ◽  
Chuancheng Lv ◽  
Fengyuan Wan ◽  
...  
