Advances and Challenges in Deep Learning: Efficient, Explainable, and Scalable Intelligent Systems
Contributors
Thiyagarajan
Sudhakar K
Keywords
Proceeding
Track
Engineering and Sciences
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
The rapid proliferation of Deep Learning (DL) has transformed intelligent systems, enabling breakthroughs in domains such as healthcare, finance, autonomous systems, and natural language processing. Although DL models achieve remarkable predictive accuracy, their adoption in safety-critical and high-stakes environments remains constrained by a fundamental “trilemma” among efficiency, explainability, and scalability. In particular, the opaque “black-box” nature of deep neural networks undermines trust, limits accountability, and complicates regulatory compliance. This paper presents a comprehensive survey of recent advances (2024–2026) in Explainable Artificial Intelligence (XAI), focusing on methods that enhance model interpretability without significantly compromising performance. We develop a structured taxonomy of XAI techniques spanning feature attribution, surrogate models, counterfactual explanations, and attention-based mechanisms, and we present the mathematical formulations underlying key attribution methods to clarify their theoretical foundations. The study also evaluates the integration of XAI into scalable cloud- and edge-based architectures, emphasizing resource-efficient deployment. A comparative analysis of state-of-the-art frameworks indicates that hybrid XAI approaches can improve interpretability by up to 40% while preserving near real-time inference, thereby narrowing the gap between model performance and practical usability.
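To make the class of feature-attribution methods mentioned in the abstract concrete, the sketch below approximates Integrated Gradients (Sundararajan et al., 2017), one well-known attribution technique of this kind. It is a minimal illustration, not the paper's own method: the toy model, zero baseline, and step count are assumptions introduced here for demonstration.

```python
# Minimal sketch of Integrated Gradients, a representative feature-attribution
# method. The toy model, zero baseline, and step count are illustrative
# assumptions, not details drawn from the surveyed paper.
import numpy as np

def model(x: np.ndarray) -> float:
    """Toy differentiable scalar model standing in for a deep network."""
    return float(np.tanh(x @ np.array([0.7, -1.2, 0.5])))

def numerical_grad(f, x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Central-difference estimate of the gradient of scalar f at x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def integrated_gradients(f, x, baseline, steps: int = 64) -> np.ndarray:
    """Approximate IG_i(x) = (x_i - x'_i) * ∫_0^1 ∂f(x' + α(x - x'))/∂x_i dα
    with a midpoint Riemann sum along the straight-line path from the
    baseline x' to the input x."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints in (0, 1)
    total = np.zeros_like(x)
    for a in alphas:
        total += numerical_grad(f, baseline + a * (x - baseline))
    return (x - baseline) * total / steps

x = np.array([1.0, 0.5, -0.3])
attributions = integrated_gradients(model, x, baseline=np.zeros_like(x))
# Completeness check: attributions sum to approximately f(x) - f(baseline).
print(attributions, attributions.sum())
```

The per-feature attributions sum to the difference between the model's output at the input and at the baseline (the completeness axiom), which is one of the theoretical properties that motivates attribution methods of this family.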