Bridging Radiology and Microscopy through AI: A Review on Breast Cancer Diagnosis
Contributors
Himanish Shekhar Das
Subrata Chowdhury
Proceeding Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Breast cancer remains a leading cause of mortality among women worldwide, and accurate, early diagnosis is critical for effective treatment planning. Traditional diagnostic workflows often rely on isolated imaging modalities—such as mammography, ultrasound, or histopathology—each offering partial insights into tumor morphology and progression. This fragmentation limits diagnostic precision and hampers clinical decision-making. Recent advances in artificial intelligence (AI) have demonstrated immense potential to unify heterogeneous imaging data, enabling multimodal learning systems that bridge radiological and microscopic domains. This review synthesizes current research trends in AI-driven integration of imaging modalities for breast cancer diagnosis, focusing on deep learning architectures, cross-modal feature fusion, and explainable AI frameworks. Significant findings highlight that multimodal AI models consistently outperform unimodal counterparts in diagnostic accuracy, lesion characterization, and prognostic prediction. Moreover, the inclusion of histopathological and radiological correlations enhances interpretability and clinical trust. The review identifies key challenges related to data heterogeneity, standardization, and generalizability across populations. Applications of this integrative approach span computer-aided diagnostics, personalized oncology, and telepathology solutions for low-resource settings. The study concludes that AI-driven multimodal fusion represents a transformative pathway toward comprehensive, explainable, and population-relevant breast cancer diagnostics.