
Bachelor Thesis: Towards a Taxonomy for Security Threats on Federated Learning

Written: 9 Dec 2019, 17:20
by Aidmar
Telecooperation Lab, Computer Science department, TU-Darmstadt

Interested in Machine Learning and Security/Privacy?
ML security is a hot, emerging topic that pervades the research community and the world's top conferences! Working on this topic will deepen your understanding of ML and let you explore one of its crucial aspects: SECURITY/PRIVACY.

Description
Google uses the federated learning technique to build machine learning models from distributed data (e.g., for Gboard) [1,2]. Users train the model locally on their own data and send only the model updates to a server, which aggregates all updates to optimize the global model. Although this technique was proposed to protect users' privacy, it has turned out to be vulnerable to various attacks that threaten both model integrity and user data.
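The aggregation step described above can be sketched in a few lines. This is a minimal illustration of federated averaging (a weighted mean of client parameters), not Google's actual implementation; all names and values are hypothetical.

```python
def fedavg(client_updates, client_sizes):
    """Aggregate client model parameters, weighted by local dataset size.

    client_updates: list of parameter lists (flattened), one per client
    client_sizes:   number of local training examples per client
    """
    total = sum(client_sizes)
    num_params = len(client_updates[0])
    # Weighted average of each parameter across all clients
    return [
        sum(u[i] * n for u, n in zip(client_updates, client_sizes)) / total
        for i in range(num_params)
    ]

# Two clients send locally trained parameters; the server aggregates them.
updates = [[1.0, 2.0], [3.0, 4.0]]
sizes = [1, 3]
print(fedavg(updates, sizes))  # [2.5, 3.5]
```

Because the server sees only these parameter updates, a malicious client can inject crafted updates (e.g., for model poisoning), which is exactly the class of threats this thesis will survey.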

Goals
1. Conduct an extensive literature review of security threats against federated learning
2. Propose a taxonomy of these threats
(3.) Evaluate and compare the identified threats

Skills
- Familiarity with neural networks
- Programming skills (Python)

Contact: Aidmar Wainakh (wainakh@tk.tu-darmstadt.de)
Please make your email's subject: [FL Threats Thesis APPLICANT]

[1] https://federated.withgoogle.com/
[2] https://ai.googleblog.com/2017/04/feder ... ative.html