Differentially-private federated learning with long-term constraints using online mirror descent

Issue Date
2021-07-12
Authors
Odeyomi, Olusola T.
Záruba, Gergely V.
Citation

Odeyomi, O., & Zaruba, G. (2021). Differentially-private federated learning with long-term constraints using online mirror descent. Paper presented at the IEEE International Symposium on Information Theory (ISIT), 2021-July, 1308-1313. doi:10.1109/ISIT45174.2021.9518177

Abstract

This paper studies a fully decentralized online federated learning setting with long-term constraints. The fully decentralized setting removes the communication and computational bottlenecks of a central server coordinating a large number of clients, and online learning captures situations where the data distribution varies over time. Practical federated learning deployments are subject to long-term constraints, such as limits on energy, monetary cost, and time: clients are not obligated to satisfy any constraint in every round, but must satisfy these constraints over the whole horizon. To protect the local model updates that clients share, local differential privacy is introduced. An online mirror descent-based algorithm is proposed and its regret bound is derived. This bound is compared with the regret bound of a differentially-private online gradient descent algorithm previously proposed for federated learning.
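The abstract does not spell out the update rule, but the ingredients it names (mirror descent iterates, noisy shared updates, a constraint that only needs to hold in the long run) can be illustrated. Below is a minimal sketch, assuming an entropic mirror map on the probability simplex, Laplace noise for local differential privacy, a ring gossip topology, and a primal-dual treatment of the long-term constraint; the function names, topology, and synthetic losses are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def ldp_perturb(x, sensitivity, epsilon, rng):
    """Add Laplace noise scaled to sensitivity/epsilon before sharing
    a local model, a standard local-DP mechanism (an assumption here)."""
    return x + rng.laplace(0.0, sensitivity / epsilon, size=x.shape)

def entropic_omd_step(x, grad, eta):
    """One online mirror descent step with the negative-entropy mirror
    map; the multiplicative update keeps iterates on the simplex."""
    logits = np.log(np.clip(x, 1e-12, None)) - eta * grad
    w = np.exp(logits - logits.max())  # subtract max for numerical stability
    return w / w.sum()

rng = np.random.default_rng(0)
d, n_clients, T = 5, 4, 50
eta, eps, sens = 0.5, 1.0, 1.0

# Doubly stochastic mixing matrix for a decentralized ring topology.
W = np.zeros((n_clients, n_clients))
for i in range(n_clients):
    W[i, i] = 0.5
    W[i, (i + 1) % n_clients] = 0.25
    W[i, (i - 1) % n_clients] = 0.25

X = np.full((n_clients, d), 1.0 / d)  # each client starts at the uniform point
lam = np.zeros(n_clients)             # dual variables for the long-term constraint

for t in range(T):
    # Each client shares only a noise-perturbed copy of its model (local DP).
    noisy = np.stack([ldp_perturb(X[i], sens, eps, rng) for i in range(n_clients)])
    mixed = W @ noisy                  # gossip averaging with neighbors
    for i in range(n_clients):
        c = rng.normal(size=d)         # stand-in for a time-varying linear loss
        g = rng.normal(size=d)         # stand-in for a per-round constraint gradient
        grad = c + lam[i] * g          # Lagrangian gradient: loss + dual * constraint
        X[i] = entropic_omd_step(mixed[i], grad, eta)
        # Dual ascent: violations may occur per round but stay bounded in sum.
        lam[i] = max(0.0, lam[i] + eta * (g @ X[i]))
```

The dual variable grows while the constraint is violated and shrinks otherwise, which is how per-round violations are traded off against the long-term budget; the paper's actual algorithm, noise mechanism, and regret analysis should be taken from the DOI link below.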

Description
Click on the DOI link to access this conference paper at the publisher's website (may not be free).