Explainable AI in the Real World: Challenges and Opportunities
DOI: https://doi.org/10.58459/icce.2023.1451

Abstract
This paper presents the results of a systematic review of research on the use of explainable AI (XAI) in the real world. The current body of research indicates a strong drive within the academic community to explore explainable AI across disciplines, along with an inherent need to design prototypes of increasing complexity to tackle the numerous scientific and methodological issues that arise in the process. The main conclusion of the review is that serious methodological issues accompany the use of XAI in complex systems that reside on vast or layered information systems spanning multiple organizational units, where important data are sometimes missing, potentially limiting the validity of the XAI approach used in practice. For XAI to work in the real-world context of education, explanations must be presented to stakeholders such as teachers and students in a form they can understand, so that they can take appropriate actions and decisions. This highlights the need to study the human-computer interaction between AI and its users, which would lead to better transparency, trust, and personalization.