Impactful Neuron-based Secure Federated Learning


Erdöl E. S., Erdöl H., ÜSTÜBİOĞLU B., Solak F. Z., ULUTAŞ G.

32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024, Mersin, Türkiye, 15-18 May 2024

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI: 10.1109/siu61531.2024.10600747
  • City of Publication: Mersin
  • Country of Publication: Türkiye
  • Keywords: Byzantine attack, data poisoning, Federated learning, model poisoning, poisoning attack
  • Karadeniz Technical University Affiliated: Yes

Abstract

Federated learning is a distributed machine learning approach in which end-user devices update the learning model by training on their local data rather than on a central server. Each device trains on its own data, and the updated model parameters are aggregated on a central server to form a global model. Although this distributed learning structure has its advantages, it remains vulnerable to attacks by malicious actors. Current defenses against such attacks rely on assumptions about the end-user data distribution, and most methods in the literature are infeasible for large deep learning networks. This article therefore examines attacks and security vulnerabilities against federated learning. Model poisoning scenarios, which are among the attack types that most significantly degrade model accuracy, are applied to the learning network. In our proposed method, a Weight Pruning algorithm is used to select impactful neurons in the deep learning network. The feature vectors built from the selected neurons are then reduced to a size suitable for classification by Principal Component Analysis. Finally, the Isolation Forest unsupervised learning algorithm is used for classification. Our results show that the proposed defense exceeds the success of established defense algorithms in the literature.
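The pipeline described in the abstract (select impactful neurons by weight pruning, compress the feature vectors with PCA, flag outliers with Isolation Forest) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the magnitude-based neuron-selection criterion, the parameter names, and all default values are assumptions.

```python
# Hedged sketch of the described defense, assuming flattened client updates
# and a simple weight-magnitude criterion for "impactful" neuron selection.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest

def detect_poisoned_updates(client_updates, keep_ratio=0.1, n_components=5,
                            contamination="auto", seed=0):
    """client_updates: (n_clients, n_weights) array of flattened model updates.

    Returns +1 for updates judged benign, -1 for suspected poisoned updates.
    """
    updates = np.asarray(client_updates, dtype=float)

    # Weight pruning (assumed criterion): keep only the weight positions with
    # the largest mean absolute value across clients.
    importance = np.abs(updates).mean(axis=0)
    k = max(1, int(keep_ratio * importance.size))
    top_idx = np.argsort(importance)[-k:]
    features = updates[:, top_idx]

    # PCA reduces the pruned feature vectors to a size suitable for classification.
    n_components = min(n_components, *features.shape)
    reduced = PCA(n_components=n_components, random_state=seed).fit_transform(features)

    # Isolation Forest labels anomalous updates (-1) as suspected poisoned.
    forest = IsolationForest(contamination=contamination, random_state=seed)
    return forest.fit_predict(reduced)
```

In an aggregation round, the server would run this on the collected updates and exclude clients labeled -1 before averaging; thresholds such as `keep_ratio` and `contamination` would need tuning against the actual model and attack setting.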