Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments

Authors

  • Toru ISHIDA, Department of Computer Science, Hong Kong Baptist University
  • Tongxi LIU, Department of Educational Studies, Hong Kong Baptist University
  • Hailong WANG, Department of Computer Science, Hong Kong Baptist University
  • William K. CHEUNG, Department of Computer Science, Hong Kong Baptist University

DOI:

https://doi.org/10.58459/icce.2024.4810

Abstract

Workshop courses designed to foster creativity are gaining popularity. However, even experienced faculty teams find it challenging to achieve a holistic evaluation that accommodates diverse perspectives. Adequate deliberation is essential to integrate varied assessments, but faculty often lack the time for such exchanges. Deriving an average score without discussion undermines the purpose of a holistic evaluation. This paper therefore explores the use of a Large Language Model (LLM) as a facilitator to integrate diverse faculty assessments. Scenario-based experiments were conducted to determine whether the LLM could integrate diverse evaluations and explain the underlying pedagogical theories to faculty. The results were noteworthy, showing that the LLM can effectively facilitate faculty discussions. Additionally, the LLM demonstrated the capability to create evaluation criteria by generalizing from a single scenario-based experiment, leveraging its existing pedagogical domain knowledge.
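The paper itself publishes no code, but the facilitation setup the abstract describes can be illustrated with a minimal sketch: several faculty assessments of one student project are passed to an LLM, which is prompted to reconcile them into a holistic evaluation rather than an average, and to explain its pedagogical reasoning. The sketch below assumes the OpenAI Python client; the model name, the example assessments, and the prompt wording are illustrative assumptions, not the authors' actual protocol.

# Minimal sketch (not the authors' implementation): prompting an LLM to
# integrate diverse faculty assessments into a holistic evaluation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical per-faculty assessments of one student project.
faculty_assessments = [
    {"evaluator": "A", "score": 8, "comment": "Strong originality, weak feasibility analysis."},
    {"evaluator": "B", "score": 5, "comment": "Prototype is incomplete; teamwork was uneven."},
    {"evaluator": "C", "score": 7, "comment": "Clear presentation grounded in user research."},
]

prompt = (
    "You are facilitating a faculty discussion in a creativity workshop course.\n"
    "Integrate the following assessments into a single holistic evaluation.\n"
    "Do not simply average the scores: reconcile the differing perspectives,\n"
    "explain the pedagogical reasoning behind your synthesis, and state the\n"
    "evaluation criteria implied by the comments.\n\n"
    + "\n".join(
        f"Evaluator {a['evaluator']}: score {a['score']}/10 - {a['comment']}"
        for a in faculty_assessments
    )
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)

Keeping all scores and comments in a single prompt lets the model surface disagreements explicitly, which is closer to the deliberation the paper aims to replace than computing a mean would be.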

Published

2024-11-25

How to Cite

Facilitating Holistic Evaluations with LLMs: Insights from Scenario-Based Experiments. (2024). International Conference on Computers in Education. https://doi.org/10.58459/icce.2024.4810