Simulation-Driven Neuro-Symbolic Explainable AI for Trustworthy Precision Agriculture: A Review
Contributors
Dr. Syed Ibad Ali
Prof. (Dr.) Shashi Kant Gupta
Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Precision agriculture uses artificial intelligence to support farming decisions, but many existing AI models operate as black boxes and do not clearly explain their outputs. This lack of transparency and supporting evidence makes farmers and decision-makers less inclined to apply these systems in practice. This paper reviews simulation-driven techniques that combine explainable AI, neuro-symbolic AI, and blockchain-based verification to increase the trustworthiness and dependability of precision agriculture systems. It discusses explainable AI techniques that make it easier to understand how models reach their decisions, neuro-symbolic approaches that integrate data-driven learning with expert farming knowledge, and digital twins and simulation environments for experimenting with farming choices before they are applied in the field. It also examines how blockchain technology can be used to verify data sources, model updates, and decision records. The primary findings indicate that simulation-driven neuro-symbolic models make decisions easier to interpret and validate while reducing the need for costly field testing, allowing agricultural strategies to be evaluated more thoroughly before implementation. The techniques reviewed apply to yield forecasting, disease detection, field monitoring, irrigation planning, and sustainable resource management, and they contribute to the development of more transparent and reliable AI systems for modern farming.
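To make these themes concrete, the Python sketch below illustrates in miniature how a neuro-symbolic pipeline with verifiable decision records might be structured. It is a minimal illustration, not the implementation described in the reviewed work: the linear "predictor", the agronomic rule thresholds, and all function names are hypothetical assumptions chosen for readability.

import hashlib
import json

# Toy "neural" component: a fixed linear model standing in for a trained
# yield predictor (weights and baseline are illustrative assumptions).
def predict_yield(features):
    w = {"rainfall_mm": 0.004, "soil_n_ppm": 0.02, "temp_c": -0.01}
    base = 3.0  # baseline yield in t/ha (illustrative)
    return base + sum(w[k] * v for k, v in features.items())

# Symbolic component: expert-style agronomic rules that validate or flag
# the data-driven prediction (thresholds are hypothetical examples).
def apply_rules(features, predicted):
    flags = []
    if features["rainfall_mm"] < 200:
        flags.append("rainfall below viable range; prediction unreliable")
    if predicted > 12.0:
        flags.append("predicted yield exceeds regional maximum; capped")
        predicted = 12.0
    return predicted, flags

# Verification component: chain each decision record to the previous one
# by hash, giving a tamper-evident log in the spirit of the blockchain-based
# record keeping discussed in the review.
def append_record(log, record):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True) + prev_hash
    record = {**record, "prev_hash": prev_hash,
              "hash": hashlib.sha256(payload.encode()).hexdigest()}
    log.append(record)
    return log

features = {"rainfall_mm": 450, "soil_n_ppm": 35, "temp_c": 22}
raw = predict_yield(features)
final, flags = apply_rules(features, raw)
log = append_record([], {"features": features,
                         "yield_t_ha": round(final, 2),
                         "flags": flags})
print(log[-1])

Running the sketch prints a single hash-chained decision record. In a real deployment the symbolic rules would be elicited from agronomic experts, the predictor would be a trained model, and the log would be anchored to a distributed ledger rather than an in-memory list.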