Hybrid Energy Optimization in IoT Using Fuzzy Q-Reinforcement Learning for Wireless Sensor Networks
Contributors
Manju Bargavi
A. B. Rajendra
Track
Engineering and Sciences
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Wireless Sensor Networks (WSNs) are increasingly deployed across diverse real-world applications, including environmental monitoring, industrial automation, and smart city infrastructure. However, energy limitations remain a critical bottleneck constraining network longevity and operational effectiveness, particularly in remote deployments where battery replacement is infeasible. This paper presents a Fuzzy Q-Reinforcement Learning (FQRL) framework for hybrid energy optimization in IoT-enabled WSNs. The proposed system integrates fuzzy logic-based inference for real-time duty cycle adjustment with Q-learning-based adaptive routing and node ranking, minimizing energy consumption while sustaining network performance. The system is implemented on the Contiki OS with the Cooja network simulator and incorporates multi-source energy harvesting from solar, wind, and mechanical vibration. Sensor data collected from 10 nodes across 4 sensor modalities (temperature, humidity, wind, and light) yielded 4,000 data points calibrated to the environmental conditions of Mysuru, India. Experimental evaluation demonstrates that FQRL achieves a mean energy retention of μ(eᵣ) = 68.2% with a low standard deviation of σ(eᵣ) = 2.10, outperforming conventional Reinforcement Learning (RL) and Adaptive Duty Cycling (ADC) approaches in both stability and efficiency. A comparative study confirms that the FQRL method provides superior duty cycle control, reduced energy waste, and enhanced network lifetime in resource-constrained IoT environments.
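The two components named in the abstract, fuzzy inference for duty-cycle adjustment and a tabular Q-learning update for next-hop ranking, can be sketched as follows. The membership breakpoints, rule outputs, reward, and hyperparameters below are illustrative assumptions for exposition, not the paper's actual design.

```python
def fuzzy_duty_cycle(residual_energy: float) -> float:
    """Map residual energy in [0, 1] to a duty cycle via three fuzzy sets.

    Triangular memberships and rule outputs (low -> 10%, medium -> 40%,
    high -> 80% duty cycle) are assumed values, not taken from the paper.
    """
    low = max(0.0, min(1.0, (0.5 - residual_energy) / 0.5))    # "low" membership
    high = max(0.0, min(1.0, (residual_energy - 0.5) / 0.5))   # "high" membership
    med = max(0.0, 1.0 - abs(residual_energy - 0.5) / 0.5)     # "medium" membership
    # Weighted (centroid-style) defuzzification over the three rule outputs.
    num = 0.10 * low + 0.40 * med + 0.80 * high
    den = low + med + high
    return num / den


def q_update(q, state, action, reward, next_state, actions,
             alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for ranking candidate next-hop nodes.

    `q` maps (state, action) pairs to values; the reward would typically
    combine residual energy and link cost of the chosen neighbor.
    """
    best_next = max(q.get((next_state, a), 0.0) for a in actions)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In this sketch, a nearly depleted node (residual energy near 0) is assigned a 10% duty cycle and a fully charged node 80%, while the Q-table is updated per forwarding decision so that energy-rich, low-cost neighbors accumulate higher values over time.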