32nd IEEE Conference on Signal Processing and Communications Applications, SIU 2024, Mersin, Türkiye, 15-18 May 2024
Training artificial intelligence models at a single center requires sharing personal data with third parties. Methods such as federated learning, in which models are trained without collecting data from the participants, preserve the privacy of personal data. However, because federated learning receives no information from participants about their data, imbalanced data distributions can arise and the system can become a target for attacks. Many methods in the literature that defend against security problems in federated learning rest on the assumptions that participants' data is distributed homogeneously and that malicious users attack independently. The Federated Learning Resistant to Byzantine Attacks with Statistical Distribution Around Quantiles (QBARFL) method proposed in this study offers a federated learning method that is resistant to Byzantine attacks without making any assumptions about the data sets, whether the attacks are coordinated or independent. The proposed method calculates the Euclidean distances between each participant's current model and the global model. From the resulting distance matrix, it determines an ideal distance using a quantile-based mean absolute deviation measure (QMAD). It then detects outliers by examining the statistical spread around this ideal distance. The results of two different experiments show that QBARFL outperforms similar methods that use the interquartile range as a statistical spread measure, even in scenarios with heterogeneous data distributions, coordinated attacks, and a high proportion of malicious participants.
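The following is a minimal sketch of the distance-based filtering step the abstract describes; the quantile choice q, the threshold multiplier k, and the function name qbarfl_filter are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def qbarfl_filter(client_updates, global_model, q=0.5, k=1.5):
    """Sketch of quantile-based Byzantine filtering.

    client_updates: list of flattened client model vectors
    global_model:   flattened global model vector
    q:              quantile taken as the "ideal" distance (assumption: median)
    k:              spread multiplier for the outlier threshold (assumption)
    """
    updates = np.stack(client_updates)
    # Euclidean distance of each client model to the global model
    dists = np.linalg.norm(updates - global_model, axis=1)
    # "Ideal" distance: the q-th quantile of the distances
    ideal = np.quantile(dists, q)
    # Mean absolute deviation around that quantile (QMAD)
    qmad = np.mean(np.abs(dists - ideal))
    # Keep clients whose distance lies within k * QMAD of the ideal distance
    keep = np.abs(dists - ideal) <= k * qmad
    return [u for u, ok in zip(client_updates, keep) if ok]

# Hypothetical usage: aggregate the retained updates by simple averaging
# kept = qbarfl_filter(updates, global_w)
# new_global = np.mean(np.stack(kept), axis=0)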