Privacy-preserving online mirror descent for federated learning with single-sided trust
Odeyomi, Olusola T.
O. T. Odeyomi and G. Zaruba, "Privacy-Preserving Online Mirror Descent for Federated Learning with Single-Sided Trust," 2021 IEEE Symposium Series on Computational Intelligence (SSCI), 2021, pp. 1-7, doi: 10.1109/SSCI50451.2021.9659544.
This paper discusses how clients in a federated learning system can collaborate with privacy guarantees in a fully decentralized setting without a central server. Most existing work relies on a central server that aggregates the clients' local updates and coordinates the training; such a setting is prone to communication and computational bottlenecks, especially when a large number of clients is involved. Moreover, most existing federated learning algorithms do not handle situations where the data distribution is time-varying, such as in real-time traffic monitoring. To address these problems, this paper proposes a differentially-private online mirror descent algorithm. To provide additional privacy for the clients' loss gradients, local differential privacy is introduced. Simulation results are based on a proposed differentially-private exponential gradient algorithm, a variant of differentially-private online mirror descent with an entropic regularizer. The simulations show that all clients converge to the global optimal vector over time. The regret bound of the proposed differentially-private exponential gradient algorithm is compared with the regret bounds of state-of-the-art online federated learning algorithms from the literature.
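To illustrate the core update the abstract describes, here is a minimal sketch of one step of a differentially-private exponentiated gradient update. This is not the paper's exact algorithm: the Laplace noise mechanism, step size `eta`, and sensitivity bound are assumptions chosen for illustration; the exponentiated gradient form (multiplicative weights with renormalization onto the simplex) is the standard instance of online mirror descent with an entropic regularizer.

```python
import numpy as np

def dp_exponentiated_gradient_step(w, grad, eta=0.1, epsilon=1.0,
                                   sensitivity=1.0, rng=None):
    """One differentially-private exponentiated gradient step.

    The local gradient is perturbed with Laplace noise (a common local
    differential privacy mechanism; the paper's exact mechanism may differ)
    before the multiplicative-weights update, and the result is renormalized
    so it remains a probability vector on the simplex.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Noise scale sensitivity/epsilon is the standard Laplace-mechanism choice.
    noisy_grad = grad + rng.laplace(scale=sensitivity / epsilon, size=grad.shape)
    # Entropic-regularizer mirror descent update: multiplicative weights.
    w_new = w * np.exp(-eta * noisy_grad)
    return w_new / w_new.sum()

# Hypothetical example: a 4-dimensional decision vector updated with one
# noisy gradient observation.
w = np.full(4, 0.25)
g = np.array([0.5, -0.2, 0.1, 0.0])
w = dp_exponentiated_gradient_step(w, g, rng=np.random.default_rng(0))
```

In a decentralized setting each client would run this step on its own loss gradient and then mix its iterate with neighbors' iterates over the communication graph; the noise added here is what "local differential privacy" protects.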