Applied Sciences (Switzerland), vol. 15, no. 11, 2025 (SCI-Expanded)
Federated learning is a distributed machine learning approach in which end users train local models on their own data and a trusted central server combines the model updates into a global model. Despite its advantages, this distributed structure is vulnerable to poisoning attacks: because end users keep their data and training processes private, the server cannot directly verify individual contributions. Existing defense mechanisms often fail when facing varied attack types or high percentages of malicious participants. This paper proposes Neuron-Centric Federated Learning Defense (NC-FLD), a novel defense algorithm that dynamically identifies and analyzes the most significant neurons across model layers rather than examining the entire gradient space. Unlike existing methods that treat all parameters equally, NC-FLD builds feature vectors from the selected neurons with the highest training impact and then applies dimensionality reduction to enhance their discriminative features. We conduct experiments with various attack scenarios and malicious-participant rates across multiple datasets (CIFAR-10, F-MNIST, and MNIST), and we additionally perform simulations on the GTSR dataset as a real-world application. Experimental results demonstrate that NC-FLD successfully defends against diverse attack scenarios under both IID and non-IID data distributions, maintaining accuracy above 70% with 40% malicious participation, a 5–15% improvement over the state-of-the-art method. NC-FLD thus shows enhanced robustness across diverse data distributions while effectively mitigating both data and model poisoning attacks.
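To make the neuron-centric idea concrete, the following is a minimal illustrative sketch, not the paper's actual implementation: it ranks neurons by update magnitude, builds per-client feature vectors from the top-ranked neurons, reduces their dimensionality, and separates suspected malicious clients from benign ones. The function names (`select_top_neurons`, `flag_malicious`), the magnitude-based impact score, the choice of PCA and 2-means clustering, and all parameter values are our assumptions; the abstract does not specify NC-FLD's exact scoring, reduction, or detection steps.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def select_top_neurons(updates, k):
    """Rank neurons by mean absolute update magnitude across clients
    and return the indices of the k most influential ones.
    (Assumed proxy for 'training impact'; the paper may score differently.)"""
    impact = np.mean(np.abs(updates), axis=0)  # per-neuron impact score
    return np.argsort(impact)[-k:]

def flag_malicious(updates, k=64, n_components=2):
    """Return a boolean mask marking suspected malicious clients.

    updates: (n_clients, n_params) array of flattened model updates.
    """
    idx = select_top_neurons(updates, k)
    features = updates[:, idx]  # feature vectors from selected neurons only
    # Dimensionality reduction to sharpen the separation between groups.
    reduced = PCA(n_components=n_components).fit_transform(features)
    # Two-way clustering; the larger cluster is assumed benign.
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(reduced)
    benign = np.argmax(np.bincount(labels))
    return labels != benign

# Usage: aggregate only updates not flagged as malicious.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.01, (10, 1000))
updates[:4] += rng.normal(0.5, 0.01, (4, 1000))  # simulate 40% poisoned clients
mask = flag_malicious(updates)
global_update = updates[~mask].mean(axis=0)  # robust aggregation over benign clients
```

Restricting the feature vectors to high-impact neurons, rather than the full gradient space, is what keeps the detection step cheap and is the design choice the abstract emphasizes.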