Title: Towards a Taxonomy for Security Threats on Federated Learning
Telecooperation Lab, Computer Science department, TU-Darmstadt
Interested in Machine Learning?
ML security is a hot, emerging topic pervading the research community and top conferences worldwide! Working on this topic will deepen your understanding of ML and let you explore a crucial aspect of it: SECURITY/PRIVACY.
Google uses the Federated Learning technique to build machine learning models based on distributed data (e.g., Gboard) [1,2]. Users train the model locally on their data and send only the model updates to the server, which aggregates all updates to optimize the global model. This technique was proposed to protect users' privacy; however, it has turned out to be prone to various attacks that threaten model integrity and user data.
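The local-training-plus-aggregation loop described above can be sketched in a few lines. This is a minimal toy illustration of the federated averaging idea (clients send only model deltas, the server averages them), not Google's actual implementation; the linear model, loss, and all parameter values are assumptions chosen for brevity.

```python
import numpy as np

def local_update(global_model, data, labels, lr=0.1):
    """One client's local training step on a toy linear model
    (hypothetical mean-squared-error loss)."""
    w = global_model.copy()
    preds = data @ w
    grad = data.T @ (preds - labels) / len(labels)
    w -= lr * grad
    return w - global_model  # send only the update (delta), never the raw data

def federated_round(global_model, clients):
    """Server side: collect client deltas and average them (FedAvg-style)."""
    updates = [local_update(global_model, X, y) for X, y in clients]
    return global_model + np.mean(updates, axis=0)

# Usage: three clients holding private data drawn from the same ground truth
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(50):  # 50 communication rounds
    w = federated_round(w, clients)
print(np.round(w, 2))  # converges toward true_w
```

The server never sees client data, only averaged deltas; the attacks studied in this thesis exploit exactly those deltas (e.g., poisoned updates or information leaked through gradients).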
1. Conduct an extensive literature review of security threats on federated learning
2. Propose a taxonomy of the threats
(3.) Evaluate and compare the threats
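To give a flavor of task 2, a taxonomy can be thought of as a hierarchy from broad threat goals down to concrete attacks. The sketch below is an illustrative, non-exhaustive grouping assumed from common categories in the FL security literature; the actual taxonomy is the thesis deliverable.

```python
# Illustrative threat grouping (assumed categories, not the thesis result)
fl_threat_taxonomy = {
    "integrity (attacks on the model)": {
        "data poisoning": ["label flipping", "backdoor triggers"],
        "model poisoning": ["malicious update scaling", "byzantine updates"],
    },
    "privacy (attacks on user data)": {
        "inference attacks": ["membership inference", "property inference"],
        "reconstruction attacks": ["gradient inversion"],
    },
}

def list_threats(taxonomy):
    """Flatten the hierarchy into (category, class, threat) triples."""
    return [
        (cat, cls, threat)
        for cat, classes in taxonomy.items()
        for cls, threats in classes.items()
        for threat in threats
    ]

for triple in list_threats(fl_threat_taxonomy):
    print(" / ".join(triple))
```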
- Familiarity with neural networks
- Programming skills (Python)
Contact: Aidmar Wainakh (firstname.lastname@example.org)
Please make your email's subject: [FL Threats Thesis APPLICANT]
 https://ai.googleblog.com/2017/04/feder ... ative.html