Automatic Distractor Generation in Multiple-Choice Questions Using Large Language Models with Expert-Informed Distractor Strategies
Abstract
In recent years, automatic generation of reading-comprehension questions with artificial intelligence has attracted considerable attention. In particular, producing high-quality distractors remains a critical challenge when generating multiple-choice questions (MCQs). Recent studies have increasingly employed large language models (LLMs) to generate distractors for MCQs. However, prior research has relied solely on the implicit, black-box knowledge of LLMs and has seldom exploited human expertise in distractor design. Therefore, in this study, we propose an LLM-based distractor-generation method that explicitly incorporates expert-informed distractor strategies, which represent typical heuristics used by human experts when crafting distractors. Experiments demonstrate that our method produces distractors of higher quality than those generated by previous approaches.
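The abstract does not describe the prompting setup in detail. Purely as an illustration, the Python sketch below shows one way expert-informed distractor strategies could be stated explicitly in an LLM prompt rather than left to the model's implicit knowledge. The strategy wordings, function names, model choice, and use of the OpenAI chat API are assumptions for this sketch, not details taken from the paper.

```python
# Hypothetical sketch: injecting expert-informed distractor strategies into an LLM prompt.
# The strategy descriptions below are illustrative, not taken from the paper.
from openai import OpenAI

EXPERT_STRATEGIES = [
    "Reuse a salient word from the passage in a statement the passage does not support.",
    "State a detail that is plausible from world knowledge but contradicted by the passage.",
    "Overgeneralize a claim that the passage makes only in a limited form.",
]


def build_prompt(passage: str, question: str, answer: str) -> str:
    """Compose a distractor-generation prompt that lists the expert strategies explicitly."""
    strategy_block = "\n".join(f"- {s}" for s in EXPERT_STRATEGIES)
    return (
        "You are writing distractors for a reading-comprehension MCQ.\n"
        f"Passage:\n{passage}\n\n"
        f"Question: {question}\n"
        f"Correct answer: {answer}\n\n"
        "Generate three distractors, each following one of these expert strategies:\n"
        f"{strategy_block}\n"
        "Return one distractor per line."
    )


def generate_distractors(passage: str, question: str, answer: str) -> list[str]:
    """Call the LLM once and return the distractor candidates as a list of strings."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model; the paper does not name one
        messages=[{"role": "user", "content": build_prompt(passage, question, answer)}],
    )
    return response.choices[0].message.content.strip().splitlines()
```

In such a setup, each strategy can also be attached to the distractor it produced, which makes it possible to check whether the model actually followed the intended heuristic.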
Published: 2025-12-01
How to Cite
Automatic Distractor Generation in Multiple-Choice Questions Using Large Language Models with Expert-Informed Distractor Strategies. (2025). International Conference on Computers in Education. https://library.apsce.net/index.php/ICCE/article/view/5566