Digital Twin–Integrated Deep Reinforcement Learning Framework for Real-Time Payload-Stable UAV Landing and Energy-Aware Descent Control in Industry 5.0
Contributors
R Raj Jawahar
Dr Midhunchakkaravarthy
Dr Rajesh Dey
Keywords
Proceeding
Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
UAVs carrying payloads are highly unstable during descent because of a shifting center of gravity, aerodynamic asymmetry, and unpredictable external forces. Traditional controllers such as PID and MPC handle these nonlinear disturbances poorly, particularly when the payload varies during flight. This study proposes a Digital Twin-based deep reinforcement learning architecture for real-time payload-stable landing and energy-optimal descent control. A high-fidelity Digital Twin models the effects of payload displacement, thrust asymmetry, drag variation, and battery characteristics at rates up to 1 kHz, allowing a Soft Actor-Critic (SAC) agent to evaluate candidate corrective actions before applying them to the physical UAV. Tests with 0.5-1.2 kg payloads demonstrate a 38% reduction in landing attitude error, a 27% reduction in descent energy, and a 42% reduction in lateral drift relative to a tuned MPC baseline. The twin predicts instability 0.18 s before it registers on physical sensors, enabling proactive stabilization. An explainability layer produces human-readable explanations of control decisions, aligning the system with the explainability and human-centric values of Industry 5.0.
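The twin-in-the-loop idea described above can be sketched in miniature. The following is a hypothetical illustration, not the authors' implementation: `twin_step`, `sac_propose`, and the toy payload dynamics are all assumptions introduced here to show how a stochastic policy's candidate actions could be vetted against a twin's prediction before one is applied to the real vehicle.

```python
import random

def twin_step(attitude_error, action, payload_kg):
    """Toy one-step twin model (assumed dynamics): the action partially
    cancels the attitude error, and heavier payloads respond more
    sluggishly."""
    responsiveness = 1.0 / (1.0 + 0.5 * payload_kg)
    return attitude_error - responsiveness * action

def sac_propose(attitude_error, n=8, rng=None):
    """Stand-in for a stochastic SAC policy: noisy corrective actions
    centred on the current error (hypothetical, for illustration)."""
    rng = rng or random.Random(0)
    return [attitude_error + rng.gauss(0.0, 0.2) for _ in range(n)]

def select_action(attitude_error, payload_kg):
    """Simulate each candidate in the twin and apply only the one with
    the smallest predicted residual error (proactive vetting)."""
    candidates = sac_propose(attitude_error)
    return min(candidates,
               key=lambda a: abs(twin_step(attitude_error, a, payload_kg)))

# Usage: vet actions for a 1.2 kg payload with 0.3 rad attitude error.
best = select_action(attitude_error=0.3, payload_kg=1.2)
residual = twin_step(0.3, best, 1.2)
```

In the full framework this single-step check would be replaced by a high-rate rollout in the learned twin, but the control flow, propose in the policy, test in the twin, act on the UAV, is the same.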