Hybrid Energy Optimization in IoT Using Fuzzy Q-Reinforcement Learning for Wireless Sensor Networks


Date Published : 24 April 2026

Contributors

Manju Bargavi

Lincoln University College, Malaysia
Author

A. B. Rajendra

Lincoln University College, Malaysia
Author

Keywords

Wireless Sensor Network; Energy Harvesting; Fuzzy Q-Reinforcement Learning; IoT; Contiki OS; Cooja Simulator; Duty Cycle Optimization

Proceeding

Track

Engineering and Sciences

License

Copyright (c) 2026 Sustainable Global Societies Initiative


This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Abstract

Wireless Sensor Networks (WSNs) are increasingly deployed across diverse real-world applications, including environmental monitoring, industrial automation, and smart city infrastructure. However, limited energy remains a critical bottleneck that constrains network longevity and operational effectiveness, particularly in remote deployments where battery replacement is infeasible. This paper presents a Fuzzy Q-Reinforcement Learning (FQRL) framework for hybrid energy optimization in IoT-enabled WSNs. The proposed system integrates fuzzy logic-based inference for real-time duty cycle adjustment with Q-learning-based adaptive routing and node ranking to minimize energy consumption while sustaining network performance. The system is implemented using the Contiki OS and Cooja network simulator and incorporates multi-source energy harvesting from solar, wind, and mechanical vibration. Sensor data collected from 10 nodes across 4 sensor modalities (temperature, humidity, wind, and light) yielded 4,000 data points calibrated to the environmental conditions of Mysuru, India. Experimental evaluation demonstrates that FQRL achieves a mean energy retention of μ(eᵣ) = 68.2% with a low standard deviation of σ(eᵣ) = 2.10, outperforming conventional Reinforcement Learning (RL) and Adaptive Duty Cycling (ADC) approaches in both stability and efficiency. A comparative study confirms that the FQRL method provides superior duty cycle control, reduced energy waste, and an enhanced network lifetime in resource-constrained IoT environments.
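To make the two components of the abstract's approach concrete, the following is a minimal sketch of (a) a fuzzy inference step that maps a node's residual and harvested energy to a duty cycle, and (b) a tabular Q-learning agent for next-hop selection. The membership functions, rule base, reward shape, and parameter values here are illustrative assumptions for exposition, not the paper's actual design.

```python
# Sketch of fuzzy duty-cycle control plus Q-learning routing (hypothetical
# parameters; not the paper's exact FQRL implementation).
import random

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_duty_cycle(residual_pct, harvest_pct):
    """Map residual battery and harvested energy (both 0-100 %) to a duty
    cycle in [0.05, 0.5] via a small Mamdani-style rule base with
    weighted-average defuzzification."""
    low_e  = tri(residual_pct, -1, 0, 50)
    med_e  = tri(residual_pct, 25, 50, 75)
    high_e = tri(residual_pct, 50, 100, 101)
    low_h  = tri(harvest_pct, -1, 0, 60)
    high_h = tri(harvest_pct, 40, 100, 101)
    # Rule strength (min t-norm) paired with a crisp duty-cycle consequent.
    rules = [
        (min(low_e,  low_h),  0.05),  # little stored or incoming energy: conserve
        (min(low_e,  high_h), 0.15),
        (min(med_e,  low_h),  0.20),
        (min(med_e,  high_h), 0.30),
        (min(high_e, low_h),  0.35),
        (min(high_e, high_h), 0.50),  # ample energy: stay awake longer
    ]
    num = sum(w * d for w, d in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 0.05

class QRouter:
    """Tabular Q-learning over next-hop choices; the caller supplies a
    reward, e.g. favouring neighbours with high residual energy."""
    def __init__(self, neighbours, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = {n: 0.0 for n in neighbours}
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def choose(self):
        # Epsilon-greedy selection over neighbour Q-values.
        if random.random() < self.eps:
            return random.choice(list(self.q))
        return max(self.q, key=self.q.get)

    def update(self, hop, reward):
        # Standard Q-learning update toward reward + discounted best value.
        best_next = max(self.q.values())
        self.q[hop] += self.alpha * (reward + self.gamma * best_next - self.q[hop])
```

A depleted node with no harvesting input settles at the minimum duty cycle (0.05), while a fully charged, actively harvesting node is driven toward the maximum (0.5); the router's Q-values shift toward whichever neighbour yields the higher energy-aware reward.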



How to Cite

Bargavi, M., & Rajendra, A. B. (2026). Hybrid Energy Optimization in IoT Using Fuzzy Q-Reinforcement Learning for Wireless Sensor Networks. Sustainable Global Societies Initiative, 1(3). https://vectmag.com/sgsi/paper/view/503