A Study on the Performance of a RAG-Augmented Interview Chatbot
Abstract
As communication skills become increasingly important for job seekers, many students face challenges in preparing for interviews in terms of knowledge, experience, and self-confidence. This research explores the effectiveness of a RAG-augmented chatbot for job interview preparation, combining information retrieval with large language models (LLMs) to generate contextually relevant responses. Two LLMs, Google/gemma-2-2b-it and deepseek-ai/DeepSeek-V3, were compared across three scenarios: True-True (TT), True-False (TF), and False-False (FF). Results show that DeepSeek-V3 outperformed Google/gemma-2-2b-it in generating accurate and relevant responses, though both models struggled in the False-False scenario. System Usability Scale (SUS) testing indicated that the chatbot was perceived as easy to use and effective, with scores above the average threshold of 68. However, feedback highlighted concerns about response time and the complexity of some features. Overall, the findings suggest that the RAG-based chatbot is a promising tool for interview preparation, though improvements in model performance, system responsiveness, and interface simplicity are recommended for future development.
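The retrieve-then-generate loop the abstract describes can be sketched in a few lines. This is a minimal illustration under stated assumptions: the corpus, the token-overlap scoring, and the prompt template are hypothetical stand-ins, not the authors' implementation, and the final LLM call is left as a placeholder.

```python
# Minimal sketch of a RAG loop: retrieve relevant passages, then build a
# context-augmented prompt for an LLM. All data and scoring here are
# illustrative assumptions, not the system described in the paper.

def tokenize(text):
    """Naive whitespace tokenizer used for overlap scoring."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank corpus passages by simple token overlap with the query."""
    q = tokenize(query)
    scored = sorted(corpus, key=lambda doc: len(q & tokenize(doc)), reverse=True)
    return scored[:k]

def build_prompt(query, passages):
    """Combine retrieved context with the user's question for the LLM."""
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical knowledge base of interview-preparation tips.
corpus = [
    "Use the STAR method (Situation, Task, Action, Result) for behavioral questions.",
    "Research the company and role before the interview.",
    "Prepare questions to ask the interviewer at the end.",
]

query = "How should I answer behavioral interview questions?"
prompt = build_prompt(query, retrieve("behavioral questions answer method", corpus))
# `prompt` would then be sent to the chosen LLM
# (e.g. Google/gemma-2-2b-it or deepseek-ai/DeepSeek-V3).
```

A production system would replace the token-overlap scorer with dense embeddings and a vector index, but the control flow (retrieve, assemble context, generate) is the same.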
Published
2025-12-01
Section
Articles
How to Cite
A Study on the Performance of a RAG-Augmented Interview Chatbot. (2025). International Conference on Computers in Education. https://library.apsce.net/index.php/ICCE/article/view/5710