Differentially private online learning algorithms for federated optimization
Abstract
The exponential growth of the Internet over the past decade has generated big data from billions of connected devices. Yet big data are not readily available in many application domains: the data held by a single client may be small, unlabeled, or lacking in sufficient information. Hence, there is a need for collaboration among clients. However, such collaboration may breach privacy or expose trade secrets. This has led to the concept of federated learning, in which a federation of clients collaborate while keeping their data local. Federated learning still faces several challenges, some of which are addressed in this dissertation.
This dissertation proposes a fully decentralized federated learning architecture in which the central server is removed. This architecture eliminates the communication congestion that occurs when many clients must communicate with a single central server. To prevent clients' data from being exposed through the reconstruction of shared models, this dissertation reinforces federated learning with local differential privacy. All proposed algorithms in this dissertation are based on online mirror descent, which is more efficient than the stochastic gradient descent-based federated learning algorithms in the literature. Because the proposed algorithms operate online, they are inherently capable of coping with time-varying data, which makes them well suited to real-time monitoring systems. Theoretical and simulation results show that the proposed algorithms outperform existing federated learning algorithms.
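As a rough illustration of the kind of update the abstract describes, the Python sketch below has each client take an online mirror descent step on its local loss, perturb its model with noise for local differential privacy, and average the noisy models received from its neighbors in place of a central server. The entropic mirror map, the Gaussian noise mechanism, the uniform averaging weights, and all function names are illustrative assumptions, not the dissertation's actual algorithms.

import numpy as np

def omd_entropic_step(w, grad, eta):
    # One online mirror descent step with the negative-entropy mirror map
    # (exponentiated gradient); keeps w on the probability simplex.
    # The entropic map is an assumed choice for illustration.
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

def privatize(w, sigma, rng):
    # Add Gaussian noise to a model before it is shared with neighbors,
    # a common local differential privacy mechanism; sigma sets the
    # privacy/utility trade-off. Assumed mechanism, not the dissertation's.
    return w + rng.normal(scale=sigma, size=w.shape)

def decentralized_round(models, grads, neighbors, eta=0.1, sigma=0.01, seed=0):
    # One communication round with no central server: every client updates
    # locally, shares only a noise-perturbed model, and averages what it
    # receives from its neighbors (uniform weights assumed).
    rng = np.random.default_rng(seed)
    updated = [omd_entropic_step(w, g, eta) for w, g in zip(models, grads)]
    shared = [privatize(w, sigma, rng) for w in updated]
    mixed = []
    for i, w in enumerate(updated):
        stack = [w] + [shared[j] for j in neighbors[i]]
        avg = np.mean(stack, axis=0)
        # Project back onto the simplex domain of the entropic mirror map.
        avg = np.clip(avg, 1e-12, None)
        mixed.append(avg / avg.sum())
    return mixed

Because the gradients change from round to round, the same loop handles time-varying losses, which is the online setting the abstract refers to; the noisy sharing step is what keeps each client's raw data and exact model private.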
Description
Thesis (Ph.D.)-- Wichita State University, College of Engineering, Dept. of Electrical Engineering & Computer Science