A Human-Centered Explainable AI Framework for Enhancing Trust and Decision Support in Learning Analytics Systems
Contributors
Dr. Ravi Soni
Dr. Sharad Salunke
Keywords
Proceeding
Track
Engineering and Sciences
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Artificial Intelligence has become widely adopted in learning analytics systems for predicting student performance, enabling adaptive learning, and supporting institutional decision-making. Despite improvements in predictive accuracy, many of these systems remain opaque, difficult to interpret, and insufficiently aligned with human-centered design principles. This lack of transparency often weakens user trust and can limit effective adoption by both educators and learners. This study presents a systematic review of Explainable Artificial Intelligence (XAI) and Human-Centered AI research in educational contexts published between 2015 and 2025. Relevant peer-reviewed studies were identified through Web of Science, Scopus, and IEEE Xplore using a PRISMA-guided screening approach. The review identifies critical gaps in the integration of explainability, trust modeling, and decision-support mechanisms within learning analytics systems. Based on a thematic synthesis of the included studies, a conceptual Human-Centered Explainable AI framework is proposed to enhance transparency, usability, and trust in intelligent educational systems. The framework establishes a structured foundation for empirical validation and system implementation in subsequent stages of this research.