Integrated Deep Learning and Explainable AI Framework for Anomaly Detection and Fault Prediction in Cyber-Physical Systems
Contributors
Santoshkumar Vaman Chobe
Weiwei Jiang
Track
Engineering and Sciences
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Cyber-Physical Systems (CPS) are central to present-day industrial domains such as smart manufacturing, healthcare, and energy systems. Ensuring their reliability demands advanced techniques for anomaly detection and fault prediction. However, traditional methods generally tackle these tasks separately and lack interpretability. We propose an integrated framework that combines deep learning with Explainable Artificial Intelligence (XAI) to efficiently detect anomalies and predict faults in CPS. The framework uses an LSTM-based autoencoder to model normal system behavior and detect anomalies from reconstruction error, while a supervised neural network performs fault prediction through Remaining Useful Life (RUL) estimation. SHAP and LIME techniques are incorporated to improve explainability. Experimental evaluation on the NASA C-MAPSS dataset shows accurate health-condition assessment, robust anomaly detection, and improved interpretability of the prediction process compared with existing approaches. The approach thus bridges predictive performance and interpretability, yielding a framework better suited to real-world CPS.
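The anomaly-detection step described above can be sketched in miniature: a trained autoencoder reconstructs each input window, and a window is flagged as anomalous when its reconstruction error exceeds a threshold calibrated on healthy data. The mean + 3σ calibration rule and the synthetic error values below are illustrative assumptions; the abstract specifies only that detection is based on reconstruction error, not a particular threshold rule, and the autoencoder itself is omitted here.

```python
import numpy as np

def calibrate_threshold(normal_errors, k=3.0):
    """Set the anomaly threshold as mean + k * std of reconstruction
    errors measured on healthy (normal-behavior) training data.
    The mean + 3-sigma rule is an illustrative choice, not the paper's."""
    return float(np.mean(normal_errors) + k * np.std(normal_errors))

def anomaly_flags(errors, threshold):
    """Flag a window as anomalous when its reconstruction error
    exceeds the calibrated threshold."""
    return errors > threshold

# Toy demonstration with synthetic reconstruction errors standing in
# for the output of a trained LSTM autoencoder.
rng = np.random.default_rng(0)
normal_errors = rng.normal(0.05, 0.01, size=1000)  # healthy behavior
test_errors = np.array([0.05, 0.06, 0.20])         # last window is faulty

thr = calibrate_threshold(normal_errors)
flags = anomaly_flags(test_errors, thr)
print(flags.tolist())  # only the large-error window is flagged
```

In a full pipeline, `test_errors` would come from per-window reconstruction losses of the LSTM autoencoder, and flagged windows would be passed to the RUL-estimation network and the SHAP/LIME explanation stage.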