Differentially-private federated learning with long-term constraints using online mirror descent
Abstract
This paper considers a fully decentralized online federated learning setting with long-term constraints. The fully decentralized setting removes the communication and computational bottleneck of a central server that must exchange models with a large number of clients. Online learning is introduced into the federated setting to capture situations with time-varying data distributions. Practical federated learning deployments are subject to long-term constraints, such as energy budgets, monetary costs, and time limits: clients are not obligated to satisfy any per-round constraint, but they must satisfy these constraints cumulatively over the horizon. To protect the local model updates that clients share, local differential privacy is introduced. An online mirror descent-based algorithm is proposed and its regret bound is derived. This bound is compared with the regret bound of a differentially-private version of an online gradient descent algorithm previously proposed for federated learning.
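The long-term constraint setting can be made concrete with the standard formulation from online convex optimization; the notation below (per-round losses f_t, constraint function g, decision set X, horizon T) is illustrative and may differ from the symbols used in the thesis itself.

```latex
% Online learning with long-term constraints (illustrative notation).
% The learner picks x_t each round; per-round feasibility g(x_t) <= 0 is
% NOT required, but cumulative violation must grow sublinearly in T.
\[
  \mathrm{Regret}(T) = \sum_{t=1}^{T} f_t(x_t)
    - \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x),
  \qquad
  \mathrm{Violation}(T) = \sum_{t=1}^{T} g(x_t).
\]
% A typical goal is Regret(T) = O(T^a) and Violation(T) = O(T^b)
% with a, b < 1, so both averages vanish as T grows.
```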
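As a sketch of the algorithmic template, an online mirror descent step replaces the Euclidean projection used by online gradient descent with a Bregman projection; the mirror map \psi, step size \eta, and gradient \nabla f_t below are generic placeholders, not the specific choices made in the thesis.

```latex
% Generic online mirror descent update, with Bregman divergence
% D_\psi(x, y) = \psi(x) - \psi(y) - \langle \nabla\psi(y), x - y \rangle.
\[
  x_{t+1} = \arg\min_{x \in \mathcal{X}}
    \Big\{ \eta \, \langle \nabla f_t(x_t), x \rangle
           + D_{\psi}(x, x_t) \Big\}.
\]
% Choosing \psi(x) = \tfrac{1}{2}\|x\|_2^2 recovers online projected
% gradient descent, the baseline against which the regret bound is compared.
```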
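Local differential privacy for shared updates is commonly achieved by clipping each update and adding calibrated noise before it leaves the client; the Gaussian-mechanism sketch below is a generic illustration under that assumption (the function name and parameters are hypothetical, not the mechanism specified in the thesis).

```python
import numpy as np

def privatize_update(update: np.ndarray, clip_norm: float,
                     epsilon: float, delta: float) -> np.ndarray:
    """Clip a local model update and add Gaussian noise before sharing.

    Generic local-DP sketch: clipping bounds the L2 sensitivity of the
    shared update by `clip_norm`, and the noise scale follows the standard
    Gaussian-mechanism calibration
    sigma = clip_norm * sqrt(2 ln(1.25/delta)) / epsilon.
    """
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + np.random.normal(0.0, sigma, size=update.shape)

# Example: a client privatizes its update before sending it to its
# neighbors (in the fully decentralized setting there is no central server).
noisy = privatize_update(np.random.randn(10), clip_norm=1.0,
                         epsilon=1.0, delta=1e-5)
```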