Integrating Explainable Artificial Intelligence in Active Video Watching
DOI: https://doi.org/10.58459/icce.2023.4781

Abstract
The use of videos in learning has increased over the past years, and alongside the popularity of video-based learning there has been a surge of interest in Artificial Intelligence (AI) in education. Previous studies have explored AI technologies in Active Video Watching, a form of video-based learning. One example is AVW-Space, a video-based learning platform developed by the University of Canterbury. The use of AI in AVW-Space, for instance to assess the quality of comments made by users, has increased student engagement and learning. In recent surveys on Active Video Watching, students expressed interest in explanations of how the system's AI makes decisions. One way to integrate such explanations into the system is through Explainable Artificial Intelligence (XAI). This research therefore aims to provide additional insights into the use of XAI and explanations in education and professional training through active video watching, and to explore the potential of XAI to increase user engagement and learning when using AI-supported features of active video watching systems. A further goal is to examine the AI/ML models currently used in active video watching and identify potential points of improvement in those models.