Investigating Algorithmic Bias in Affect Detectors with Constructed Categories of Student Identity

Authors

  • Nidhi Nasiar, University of Pennsylvania
  • Clara Belitz, University of Illinois Urbana-Champaign
  • Haejin Lee, University of Illinois Urbana-Champaign
  • Frank Stinar, University of Illinois Urbana-Champaign
  • Ryan S. Baker, University of Pennsylvania
  • Jaclyn Ocumpaugh, University of Pennsylvania
  • Stephen Fancsali, Carnegie Learning, Inc.
  • Steve Ritter, Carnegie Learning, Inc.
  • Nigel Bosch, University of Illinois Urbana-Champaign

Abstract

Algorithmic bias research often evaluates models in terms of traditional demographic categories (e.g., U.S. Census categories), but these may not capture the nuanced, context-dependent identities relevant to learning. This study evaluates four affect detectors (boredom, confusion, engaged concentration, and frustration) developed for an adaptive math learning system. Algorithmic fairness metrics (AUC, weighted F1, and the Model Absolute Density Distance, MADD) show subgroup differences across several categories that emerged from a free-response social identity survey (the Twenty Statements Test, TST), including both categories that mirror demographic ones (i.e., race and gender) and novel categories (i.e., Learner Identity, Interpersonal Style, and Sense of Competence). Among the demographic categories, the confusion detector performs better for boys than for girls and underperforms for West African students. Among the novel categories, biases are found for Learner Identity (boredom, engaged concentration, and confusion) and Interpersonal Style (confusion), but not for Sense of Competence. Results highlight the importance of using contextually grounded social identities when evaluating bias.
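
The per-subgroup evaluation the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' code: the column names ("label", "score"), the 0.5 decision threshold, and the bin count are hypothetical choices, and the MADD function follows the histogram-based Model Absolute Density Distance definition from the learning-analytics fairness literature.

```python
# Minimal sketch of a per-subgroup fairness audit (not the authors' code).
# Assumed columns: "label" (binary affect annotation), "score" (detector's
# predicted probability), and one column naming each student's subgroup.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score, f1_score

def madd(scores_a: np.ndarray, scores_b: np.ndarray, n_bins: int = 100) -> float:
    """Model Absolute Density Distance between two groups' predicted-score
    distributions: 0 means identical histograms, 2 means fully disjoint."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    hist_a, _ = np.histogram(scores_a, bins=bins)
    hist_b, _ = np.histogram(scores_b, bins=bins)
    return float(np.abs(hist_a / len(scores_a) - hist_b / len(scores_b)).sum())

def subgroup_report(df: pd.DataFrame, group_col: str,
                    threshold: float = 0.5) -> pd.DataFrame:
    """AUC and weighted F1 per subgroup; large gaps across rows flag possible bias."""
    rows = []
    for group, sub in df.groupby(group_col):
        if sub["label"].nunique() < 2:
            continue  # AUC is undefined when a subgroup has only one class
        rows.append({
            group_col: group,
            "n": len(sub),
            "auc": roc_auc_score(sub["label"], sub["score"]),
            "weighted_f1": f1_score(sub["label"], sub["score"] >= threshold,
                                    average="weighted"),
        })
    return pd.DataFrame(rows)
```

Running subgroup_report once per detector and per identity category (demographic or TST-derived), together with pairwise madd comparisons between groups, reproduces the general shape of such an analysis; the specific thresholds, bins, and statistical tests used in the paper are not reproduced here.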

Published

2025-12-01

How to Cite

Investigating Algorithmic Bias in Affect Detectors with Constructed Categories of Student Identity. (2025). International Conference on Computers in Education. https://library.apsce.net/index.php/ICCE/article/view/5563