Evaluating the Effectiveness of Large Language Models for Course Recommendation Tasks
Abstract
Large Language Models (LLMs) have made significant strides in natural language processing and are increasingly being integrated into recommendation systems. However, their potential in educational recommendation systems has yet to be fully explored. This paper investigates the use of LLMs as general-purpose recommendation models, leveraging the broad knowledge they derive from large-scale corpora for course recommendation tasks. We explore prompting and fine-tuning methods for LLM-based course recommendation and compare their performance against that of traditional recommendation models. Extensive experiments on a real-world MOOC dataset evaluate LLMs as course recommendation systems across a variety of key dimensions. Our results demonstrate that fine-tuned LLMs can achieve performance comparable to traditional models, highlighting their potential to enhance educational recommendation systems. These findings pave the way for further exploration and development of LLM-based approaches in the context of educational recommendation.
Published
2025-12-01
How to Cite
Evaluating the Effectiveness of Large Language Models for Course Recommendation Tasks. (2025). International Conference on Computers in Education. https://library.apsce.net/index.php/ICCE/article/view/5553