
    Privacy-preserving online mirror descent for federated learning with single-sided trust

    Date
    2021-12-05
    Author
    Odeyomi, Olusola T.
    Zaruba, Gergely
    Citation
    O. T. Odeyomi and G. Zaruba, "Privacy-Preserving Online Mirror Descent for Federated Learning with Single-Sided Trust," 2021 IEEE Symposium Series on Computational Intelligence (SSCI), 2021, pp. 1-7, doi: 10.1109/SSCI50451.2021.9659544.
    Abstract
    This paper discusses how clients in a federated learning system can collaborate with privacy guarantees in a fully decentralized setting without a central server. Most existing work includes a central server that aggregates the local updates from the clients and coordinates the training. This setting is therefore prone to communication and computational bottlenecks, especially when a large number of clients is involved. Moreover, most existing federated learning algorithms do not cater for situations where the data distribution is time-varying, such as in real-time traffic monitoring. To address these problems, this paper proposes a differentially-private online mirror descent algorithm. Local differential privacy is introduced to provide additional privacy for the clients' loss gradients. Simulation results are based on a proposed differentially-private exponential gradient algorithm, a variant of differentially-private online mirror descent with an entropic regularizer. The simulations show that all clients converge to the global optimal vector over time. The regret bound of the proposed differentially-private exponential gradient algorithm is compared with the regret bounds of state-of-the-art online federated learning algorithms from the literature.
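    The full algorithm is behind the DOI; as a rough illustration of the building block the abstract names, the sketch below shows a generic exponential gradient update (online mirror descent with an entropic regularizer, i.e. multiplicative weights on the probability simplex) with Laplace noise added to each loss gradient for local differential privacy. The function name, the learning rate `eta`, and the Laplace scale `1/epsilon` are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def dp_exponential_gradient(grad_fn, T, d, eta=0.1, epsilon=1.0, seed=0):
    """Sketch: exponential gradient (entropic mirror descent) over T rounds,
    with Laplace noise injected into each gradient for local differential
    privacy. Illustrative only; parameter choices are assumptions."""
    rng = np.random.default_rng(seed)
    x = np.full(d, 1.0 / d)  # start at the uniform point on the simplex
    for t in range(T):
        g = grad_fn(t, x)  # loss gradient revealed at round t
        # Laplace mechanism: noise scale 1/epsilon (sensitivity assumed 1)
        g_noisy = g + rng.laplace(scale=1.0 / epsilon, size=d)
        # entropic-mirror (multiplicative-weights) update, then renormalize
        w = x * np.exp(-eta * g_noisy)
        x = w / w.sum()
    return x
```

    For instance, feeding a fixed gradient that penalizes every coordinate except one concentrates the iterate on that coordinate over time, noise notwithstanding, which mirrors the paper's claim that clients converge toward the optimal vector.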
    Description
    Click on the DOI to access this article (may not be free).
    URI
    https://soar.wichita.edu/handle/10057/23096
    http://doi.org/10.1109/SSCI50451.2021.9659544
    Collections
    • SoC Research Publications
