Can We Ensure Accuracy and Explainability for a Math Recommender System?

Authors

  • Yiling DAI
  • Brendan FLANAGAN
  • Hiroaki OGATA

DOI:

https://doi.org/10.58459/icce.2023.951

Abstract

Providing explanations in educational recommender systems is expected to increase students' awareness of the recommendations, their trust in the system, and their motivation to adopt the recommendations. In pursuit of higher prediction accuracy, increasingly complex recommendation models are being developed, and these are difficult to explain. It remains debatable whether there is a trade-off between the accuracy and the explainability of recommender systems. In this study, we focus on the explainable math quiz recommender system, Naïve Concept Explicit (Naïve CE), proposed in our previous work. We are interested in whether the explainable Naïve CE achieves good prediction accuracy compared with a powerful but less explainable model, Matrix Factorization (MF). We also propose a combined model, CE+MF, to preserve the explainability of Naïve CE and the predictive power of MF. We then used a long-term quiz-answering dataset to evaluate the models' accuracy in predicting students' correctness rates on the quizzes. The results revealed that 1) the explainable model Naïve CE had lower accuracy than the less explainable model MF given the sparse dataset, and 2) combining the two models achieved moderate accuracy in predicting students' answers while preserving the explainability of Naïve CE. Our study serves as an example of how to develop an inherently explainable educational recommender system and how to improve its accuracy by integrating more complex models.
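
To make the modeling setup concrete, below is a minimal sketch in Python of the two kinds of predictors the abstract contrasts: a matrix-factorization model fit to a sparse student × quiz correctness matrix, and a concept-explicit score that averages a student's mastery over the concepts a quiz covers, plus one plausible way to blend them. All names, shapes, and the convex-blend combination are illustrative assumptions; the paper's actual Naïve CE and CE+MF formulations are not detailed on this page.

```python
import numpy as np

def factorize(R, mask, k=8, lr=0.01, reg=0.05, epochs=200, seed=0):
    # MF sketch: fit low-rank factors P (students x k) and Q (quizzes x k)
    # by SGD over the observed entries of the correctness matrix R.
    rng = np.random.default_rng(seed)
    n_students, n_quizzes = R.shape
    P = 0.1 * rng.standard_normal((n_students, k))
    Q = 0.1 * rng.standard_normal((n_quizzes, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - P[u] @ Q[i]
            p_u = P[u].copy()  # keep old value for the Q update
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * p_u - reg * Q[i])
    return P, Q

def concept_predict(C, mastery):
    # Concept-explicit score (hypothetical form): a quiz's predicted
    # correctness rate is the student's mean mastery over the concepts
    # that quiz covers. C: quizzes x concepts (0/1);
    # mastery: students x concepts, values in [0, 1].
    weights = C / np.maximum(C.sum(axis=1, keepdims=True), 1)
    return mastery @ weights.T

def blend(P, Q, ce_scores, alpha=0.5):
    # One plausible CE+MF combination: a convex mix of the two predictions.
    return alpha * (P @ Q.T) + (1 - alpha) * ce_scores

# Toy usage: 4 students x 5 quizzes, 2 concepts (zeros treated as unobserved).
R = np.array([[1.0, 0.0, 0.5, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.8, 0.0],
              [0.2, 0.0, 1.0, 0.0, 0.0],
              [0.0, 0.6, 0.0, 1.0, 0.4]])
mask = R > 0
C = np.array([[1, 0], [0, 1], [1, 0], [0, 1], [1, 1]])
mastery = np.array([[0.8, 0.3], [0.4, 0.9], [0.7, 0.2], [0.5, 0.8]])
P, Q = factorize(R, mask)
pred = blend(P, Q, concept_predict(C, mastery), alpha=0.5)
```

In a real system the blend weight alpha and the MF hyperparameters would be tuned on held-out answer data; the CE term keeps each prediction traceable to the concepts a quiz covers, which is what makes the combined model explainable.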


Published

2023-12-04

How to Cite

Dai, Y., Flanagan, B., & Ogata, H. (2023). Can We Ensure Accuracy and Explainability for a Math Recommender System? International Conference on Computers in Education. https://doi.org/10.58459/icce.2023.951