Does Large Dataset Matter? An Evaluation on the Interpreting Method for Knowledge Tracing
Abstract
Deep learning has become a competitive approach to building knowledge tracing (KT) models. Deep learning based knowledge tracing (DLKT) models adopt deep neural networks but lack interpretability. Researchers have started interpreting DLKT models by leveraging methods from explainable artificial intelligence (xAI). However, the previous study was conducted on a relatively small dataset without comprehensive analysis. In this work, we apply a similar interpreting method to the largest public dataset and conduct comprehensive experiments to fully evaluate its feasibility and effectiveness. The experimental results reveal that the interpreting method is feasible on the large-scale dataset, but its effectiveness declines as the number of learners grows and learners' exercise sequences lengthen.
Published
2021-11-22
Section
Articles
How to Cite
Does Large Dataset Matter? An Evaluation on the Interpreting Method for Knowledge Tracing. (2021). In Proceedings of the International Conference on Computers in Education (ICCE 2021). https://library.apsce.net/index.php/ICCE/article/view/4123