Advances and Challenges in Deep Learning: Efficient, Explainable, and Scalable Intelligent Systems


Date Published: 21 April 2026

Contributors

Thiyagarajan

Author

Sudhakar K

Nitte Meenakshi Institute of Technology (NMIT), Nitte (Deemed-to-be University)

Keywords

Explainable AI (XAI); Deep Learning; Scalable Systems; Edge Computing; Model Transparency

Proceeding

Track

Engineering and Sciences

License

Copyright (c) 2026 Sustainable Global Societies Initiative


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

The rapid proliferation of Deep Learning (DL) has significantly transformed the landscape of intelligent systems, enabling breakthroughs in domains such as healthcare, finance, autonomous systems, and natural language processing. Despite achieving remarkable predictive accuracy, the widespread adoption of DL in safety-critical and high-stakes environments remains constrained by a fundamental “trilemma” involving efficiency, explainability, and scalability. In particular, the opaque “black-box” nature of deep neural networks introduces a lack of transparency, leading to reduced trust, limited accountability, and challenges in regulatory compliance. This paper presents a comprehensive survey of recent advancements (2024–2026) in Explainable Artificial Intelligence (XAI), focusing on methods that enhance model interpretability without significantly compromising performance. We develop a structured taxonomy of XAI techniques, including feature attribution, surrogate models, counterfactual explanations, and attention-based mechanisms. Furthermore, we provide mathematical formulations underlying key attribution methods, offering insights into their theoretical foundations. The study also evaluates the integration of XAI into scalable cloud and edge-based architectures, emphasizing resource-efficient deployment. Comparative analysis of state-of-the-art frameworks reveals that hybrid XAI approaches can improve interpretability by up to 40% while preserving near real-time inference capabilities, thereby bridging the gap between model performance and practical usability.
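To illustrate the kind of mathematical formulation the survey provides for feature-attribution methods, consider Integrated Gradients, a widely used attribution technique; the formulation below is the standard published definition, shown here only as a representative example rather than as this paper's own contribution. For a model F, an input x, and a baseline input x', the attribution assigned to feature i is:

```latex
% Integrated Gradients attribution for feature i:
% the gradient of F w.r.t. feature i is integrated along the
% straight-line path from the baseline x' to the input x,
% then scaled by the feature's displacement from the baseline.
IG_i(x) \;=\; (x_i - x'_i) \int_{0}^{1}
  \frac{\partial F\bigl(x' + \alpha\,(x - x')\bigr)}{\partial x_i}\, d\alpha
```

In practice the integral is approximated with a finite Riemann sum over interpolation steps, which is what makes such attributions feasible in the near real-time, resource-constrained deployments the abstract discusses.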



How to Cite

Thiyagarajan, P., & Sudhakar, K. (2026). Advances and Challenges in Deep Learning: Efficient, Explainable, and Scalable Intelligent Systems. Sustainable Global Societies Initiative, 1(3). https://vectmag.com/sgsi/paper/view/516