Federated Learning
A decentralized ML approach where models are trained across multiple devices or servers holding local data, without sharing raw data centrally.
Definition
A privacy-preserving paradigm in which clients (e.g., smartphones) send model updates, such as gradients, to a central server rather than raw data; the server aggregates these updates into a global model. This reduces data-transfer costs and preserves local data sovereignty. Governance must secure the aggregation step (to prevent poisoning attacks), manage model versioning, and weight each client's contribution appropriately so that under-represented nodes do not bias the global model.
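The local-update-then-aggregate loop described above can be sketched as a minimal federated averaging (FedAvg-style) round in pure Python. This is an illustrative toy, not a production implementation: the model is a 1-D linear regressor, the clients and their data are invented, and real systems would add encryption, secure aggregation, and client sampling.

```python
def local_update(weights, data, lr=0.1):
    """One local SGD epoch on a client's private (x, y) pairs
    for the model y ≈ w0 + w1 * x. Only the updated weights,
    never the data, are returned to the server."""
    w0, w1 = weights
    for x, y in data:
        err = (w0 + w1 * x) - y
        w0 -= lr * err        # gradient of squared error w.r.t. w0
        w1 -= lr * err * x    # gradient of squared error w.r.t. w1
    return [w0, w1]

def federated_average(client_weights, client_sizes):
    """Server-side aggregation: average client models weighted by
    local dataset size, so each contribution counts proportionally."""
    total = sum(client_sizes)
    dims = len(client_weights[0])
    return [
        sum(w[d] * n for w, n in zip(client_weights, client_sizes)) / total
        for d in range(dims)
    ]

# Two hypothetical clients whose private data follow y = 2x.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
]
global_model = [0.0, 0.0]
for _ in range(300):  # communication rounds
    updates = [local_update(list(global_model), d) for d in clients]
    global_model = federated_average(updates, [len(d) for d in clients])
# global_model gradually approaches [0.0, 2.0] without any client
# ever transmitting its raw (x, y) pairs to the server.
```

Weighting by dataset size is the choice made in the original FedAvg algorithm; governance policies may instead cap or reweight contributions to protect against a single large (or malicious) client dominating the aggregate.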
Real-World Example
A keyboard-prediction AI learns from users’ typing patterns on their phones via federated learning: each device trains locally on recent text, sends encrypted gradient updates, and the central server aggregates them. No personal text leaves the device, preserving user privacy while improving the global language model.