
dc.contributor.advisor: Salinas Monroy, Sergio A.
dc.contributor.author: Prashar, Aseem
dc.date.accessioned: 2020-07-16T16:42:11Z
dc.date.available: 2020-07-16T16:42:11Z
dc.date.issued: 2020-05
dc.identifier.other: t20024s
dc.identifier.uri: https://soar.wichita.edu/handle/10057/18848
dc.description: Thesis (M.S.)--Wichita State University, College of Engineering, Dept. of Electrical Engineering and Computer Science
dc.description.abstract: Deep neural networks are becoming popular in a variety of fields due to their ability to learn from large-scale data sets. Recently, researchers have proposed distributed learning architectures that allow multiple users to share their data to train deep learning models. Unfortunately, privacy and confidentiality concerns limit the application of this approach, preventing organizations such as medical institutions from fully benefiting from distributed deep learning. To overcome this challenge, researchers have proposed algorithms that share only neural network parameters. This approach allows users to keep their private datasets secret while still having access to improved deep neural networks trained with the data from all participants. However, existing distributed learning approaches are vulnerable to attacks in which a malicious user can use the shared neural network parameters to recreate the private data of other users. We propose a distributed deep learning algorithm that allows a user to improve its deep learning model while preserving its privacy against such attacks. Specifically, our approach protects the privacy of a single user by limiting the number of times other users can download and upload parameters from the main deep neural network. By doing so, our approach limits the ability of attackers to recreate private data samples from the reference user while maintaining a highly accurate deep neural network. Our approach is flexible and can be adapted to work with any deep neural network architecture. We conduct extensive experiments to verify the proposed approach. We observe that the trained neural network achieves an accuracy of 95.18% while protecting the privacy of the reference user, who shares neither its private data nor its deep neural network parameters with the server or other users.
dc.format.extent: xi, 33 pages
dc.language.iso: en_US
dc.publisher: Wichita State University
dc.rights: Copyright 2020 by Aseem Prashar. All Rights Reserved.
dc.subject.lcsh: Electronic dissertations
dc.title: Privacy-preserving distributed deep learning
dc.type: Thesis
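
As a rough illustration of the mechanism described in the abstract above, the sketch below caps how many times each participant may download or upload the shared parameters, which bounds how much information about any single user's data can leak through parameter exchanges. This is a minimal sketch under assumed details: the ParameterServer class, the exchange_budget value, and the running-average update rule are hypothetical and are not taken from the thesis.

    import numpy as np

    class ParameterServer:
        """Hypothetical coordinator that enforces a per-user exchange budget."""

        def __init__(self, init_params, exchange_budget):
            self.params = init_params          # shared model parameters
            self.budget = exchange_budget      # max downloads + uploads per user
            self.used = {}                     # user_id -> exchanges consumed

        def _spend(self, user_id):
            count = self.used.get(user_id, 0)
            if count >= self.budget:
                raise PermissionError(f"user {user_id!r} exhausted its exchange budget")
            self.used[user_id] = count + 1

        def download(self, user_id):
            # Each download counts against the budget, limiting parameter exposure.
            self._spend(user_id)
            return self.params.copy()

        def upload(self, user_id, local_params, weight=0.1):
            # Each upload also counts; the running average is an assumed update rule.
            self._spend(user_id)
            self.params = (1.0 - weight) * self.params + weight * local_params

    # Usage: a user trains locally on private data and shares only parameters,
    # and only as many times as the budget allows.
    server = ParameterServer(init_params=np.zeros(10), exchange_budget=4)
    theta = server.download("alice")        # 1st exchange
    theta += 0.01 * np.random.randn(10)     # stand-in for local training
    server.upload("alice", theta)           # 2nd exchange
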


This item appears in the following Collection(s)

  • Master's Theses
    This collection includes Master's theses completed at the Wichita State University Graduate School (Fall 2005 – )
