An Efficient and Generic Method for Interpreting Deep Learning based Knowledge Tracing Models

Authors

  • Deliang WANG
  • Yu Lu
  • Zhi ZHANG
  • Penghe CHEN

DOI:

https://doi.org/10.58459/icce.2023.945

Abstract

Deep learning-based knowledge tracing (DLKT) models are regarded as a promising solution for estimating learners’ knowledge states and predicting their future performance from historical exercise records. However, their increasing complexity and diversity make it difficult for users, typically learners and teachers, to understand the models’ estimation results, which directly hinders model deployment and application. Previous studies have explored methods from explainable artificial intelligence (xAI) to interpret DLKT models, but those methods suffer from limited generalization capability and inefficient interpretation procedures. To address these limitations, we proposed a simple but efficient model-agnostic interpreting method, called Gradient*Input, to explain the predictions made by DLKT models on two datasets. Comprehensive experiments were conducted on five existing DLKT models with representative neural network architectures. The results showed that the method effectively explains the predictions of DLKT models. Further analysis of the interpretation results revealed that all five DLKT models share a similar rule in predicting learners’ item responses, and the roles of skill and temporal information were identified and discussed. We also suggested potential avenues for further investigating the interpretability of DLKT models.
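The Gradient*Input attribution idea named in the abstract can be illustrated independently of any particular DLKT architecture: the gradient of the model’s predicted response probability with respect to each input feature is multiplied elementwise by the feature value. Below is a minimal sketch on a toy logistic model with a hand-derived gradient; the model, feature encoding, and weights are hypothetical stand-ins, not the authors’ implementation (a real DLKT model would obtain the gradient via automatic differentiation).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_x_input(x, w, b):
    """Gradient*Input attribution for a toy logistic model p = sigmoid(w.x + b).

    Returns one attribution score per input feature: the gradient of the
    predicted probability w.r.t. the input, multiplied elementwise by the input.
    """
    p = sigmoid(w @ x + b)
    grad = p * (1.0 - p) * w   # analytic dp/dx for the logistic model
    return grad * x            # elementwise product: per-feature attribution

# Hypothetical encoding of a learner's interaction history as a feature vector
x = np.array([1.0, 0.0, 1.0, 1.0])
w = np.array([0.8, -0.5, 0.3, -0.2])
attributions = gradient_x_input(x, w, b=0.1)
```

Features with zero input value receive zero attribution by construction, and the sign of each score indicates whether that feature pushed the predicted response probability up or down.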

Downloads

Download data is not yet available.

Published

2023-12-04

How to Cite

An Efficient and Generic Method for Interpreting Deep Learning based Knowledge Tracing Models. (2023). International Conference on Computers in Education. https://doi.org/10.58459/icce.2023.945