Reducing Defense Vulnerabilities in Federated Learning: A Neuron-Centric Approach


Erdol E. S., Erdol H., Üstübioğlu B., Ulutaş G., Symeonidis I.

Applied Sciences (Switzerland), vol. 15, no. 11, 2025 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 15 Issue: 11
  • Publication Date: 2025
  • DOI Number: 10.3390/app15116007
  • Journal Name: Applied Sciences (Switzerland)
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Aerospace Database, Agricultural & Environmental Science Database, Applied Science & Technology Source, Communication Abstracts, INSPEC, Metadex, Directory of Open Access Journals, Civil Engineering Abstracts
  • Keywords: data poisoning, deep learning security, federated learning, model poisoning, poisoning attacks
  • Karadeniz Technical University Affiliated: Yes

Abstract

Federated learning is a distributed machine learning approach in which end users train local models on their own data and a trusted server aggregates the model updates into a global model. Despite its advantages, this distributed structure is vulnerable to attacks because end users keep their data and training process private. Current defense mechanisms often fail when facing diverse attack types or high percentages of malicious participants. This paper proposes a new defense algorithm, Neuron-Centric Federated Learning Defense (NC-FLD), which dynamically identifies and analyzes the most significant neurons across model layers rather than examining the entire gradient space. Unlike existing methods that treat all parameters equally, NC-FLD creates feature vectors from selected neurons that show the highest training impact and then applies dimensionality reduction to enhance their discriminative features. We conduct experiments with various attack scenarios and different malicious-participant rates across multiple datasets (CIFAR-10, F-MNIST, and MNIST), and we additionally perform simulations on the GTSR dataset as a real-world application. Experimental results demonstrate that NC-FLD defends successfully against diverse attack scenarios under both IID and non-IID data distributions, maintaining accuracy above 70% with 40% malicious participation, a 5–15% improvement over the state-of-the-art method. The defense shows enhanced robustness across diverse data distributions while effectively mitigating the impacts of both data and model poisoning attacks.
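
The abstract describes the core of NC-FLD only at a high level: select the neurons with the highest training impact, build per-client feature vectors from them, and apply dimensionality reduction before deciding which updates to aggregate. The sketch below illustrates one possible server-side filtering step of that kind; the paper's abstract does not specify these details, so the top-k magnitude-based neuron selection, the use of PCA, the median-distance outlier rule, and all names (neuron_centric_filter, z_thresh, etc.) are illustrative assumptions rather than the authors' implementation.

    # Minimal sketch of a neuron-centric filtering step (assumptions noted above).
    import numpy as np
    from sklearn.decomposition import PCA

    def neuron_centric_filter(client_updates, k=256, n_components=8, z_thresh=2.0):
        """client_updates: list of flattened per-client update vectors (np.ndarray)."""
        updates = np.stack(client_updates)            # shape: (clients, params)
        # Approximate "training impact" by mean absolute update magnitude
        # across clients, and keep only the k most impactful neurons/parameters.
        impact = np.mean(np.abs(updates), axis=0)
        top_idx = np.argsort(impact)[-k:]
        features = updates[:, top_idx]                # per-client feature vectors
        # Dimensionality reduction to sharpen discriminative structure.
        n_comp = min(n_components, features.shape[0], features.shape[1])
        reduced = PCA(n_components=n_comp).fit_transform(features)
        # Flag clients far from the median in the reduced space as suspicious.
        center = np.median(reduced, axis=0)
        dists = np.linalg.norm(reduced - center, axis=1)
        scale = np.median(dists) + 1e-12
        benign_mask = dists / scale < z_thresh
        return benign_mask                            # True = keep client's update

Under these assumptions, a server-side aggregator would call this once per round on the flattened client updates and average only the updates for which the returned mask is True.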