Privacy-Preserving Federated Cross-Modal Fusion for Scientific Imaging: Medical, Materials, and Generic Domain Validation
Contributors
Dr Janarthanam S
Keywords
Proceeding
Track
Engineering, Sciences, Mathematics & Computations
License
Copyright (c) 2026 Sustainable Global Societies Initiative

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Abstract
Fusing scientific imaging data across modalities, whether medical scans such as CT and MRI, microscopy for materials science, or generic visual data, is essential for scientific progress. However, pooling such data in one place often violates privacy regulations or ownership rights, limiting access to valuable distributed datasets in regulated fields. To address this, we built a federated system for cross-modal fusion that lets multiple sites train models together without exchanging raw data. It incorporates differential privacy and secure aggregation to fuse features from different modalities while each site retains control over its local data. Experiments in medical, materials, and generic imaging show that our method reaches 94.2% of a centralized model's accuracy under a strong privacy budget (ε=1.0). It also cuts data transfer by 67% through compression and handles uneven modality distribution across 15 sites. In addition, transferring knowledge between domains improved domain-specific tasks by 23% over isolated training. This framework enables privacy-preserving collaborative research in clinical diagnostics, materials defect detection, and distributed scientific data analysis without compromising regulatory compliance or intellectual property protection.
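The abstract does not specify the aggregation details, but the combination it describes (federated training with differential privacy) is commonly realized by clipping each site's model update and adding calibrated Gaussian noise before averaging. The sketch below illustrates that generic pattern only; the function name, clipping threshold, and noise multiplier are illustrative assumptions, and the paper's actual protocol, privacy accounting, and secure-aggregation layer are not described here.

```python
import numpy as np

def dp_federated_average(client_updates, clip_norm=1.0,
                         noise_multiplier=1.1, seed=None):
    """Illustrative sketch: average clipped, noised client updates
    (Gaussian mechanism). Not the paper's actual protocol."""
    rng = np.random.default_rng(seed)
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        # Clip each site's update so its contribution is bounded,
        # which bounds the sensitivity of the average.
        clipped.append(u * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise scale follows the per-client sensitivity of the mean.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Example: three simulated site updates (e.g., different modalities).
updates = [np.ones(4), 2 * np.ones(4), np.array([0.5, -0.5, 1.0, 0.0])]
agg = dp_federated_average(updates, seed=0)
```

In a real deployment, a secure-aggregation protocol would hide individual site updates from the server as well; here only the privacy-noise step is shown.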