Projected Gradient Descent Adversarial Attack and Its Defense on a Fault Diagnosis System


AYAS M. Ş., AYAS S., Djouadi S. M.

45th International Conference on Telecommunications and Signal Processing, Prague, Czech Republic, 13 July 2022

  • Publication Type: Conference Paper / Full-Text Paper
  • DOI Number: 10.1109/tsp55681.2022.9851334
  • City of Publication: Prague
  • Country of Publication: Czech Republic
  • Affiliated with Karadeniz Teknik Üniversitesi: Yes

Abstract

Knowledge-based fault diagnosis methods have become increasingly preferred because they do not require the precise models and signal patterns demanded by model-based and signal-based diagnosis methods, respectively. Machine learning (ML) techniques achieve notable fault diagnosis results by mapping information from raw signals to health conditions. However, their vulnerability to malicious attacks arises, as in other industrial applications employing ML methods. In this paper, first, a common white-box adversarial attack called the projected gradient descent (PGD) attack is injected into a deep residual learning (DRL) network model that classifies the health condition of a rolling bearing. Then, the robustness of the DRL model is analyzed to examine the effect of the implemented adversarial machine learning (AML). After that, the adversarial training technique is used to improve the robustness of the DRL model. The experimental results show that it is possible to implement AML with existing methods to force the model into misclassification. Even for a quite small perturbation, the average classification accuracy of the DRL model drops from 99.98% to 61.25%. The results also indicate that the adversarial training technique increases the robustness of the model.
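The PGD attack summarized in the abstract can be sketched as follows. This is a minimal PyTorch illustration on a toy linear classifier standing in for the paper's DRL network (the bearing model, dataset, and hyperparameters are not given here, so every name and value in this sketch is an assumption): each iteration steps in the sign of the loss gradient and projects the result back into an L-infinity ball of radius eps around the clean input.

```python
import torch
import torch.nn as nn

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """White-box PGD attack: ascend the loss via gradient-sign steps,
    projecting back into the L-inf eps-ball around the clean input x."""
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Gradient-sign ascent step, then projection onto the eps-ball
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps)
    return x_adv.detach()

# Toy demo on random "signal" features; a placeholder for the paper's
# rolling-bearing inputs, which are not reproduced here.
torch.manual_seed(0)
model = nn.Linear(8, 3)          # stand-in for the DRL classifier
x = torch.randn(4, 8)
y = torch.randint(0, 3, (4,))
x_adv = pgd_attack(model, x, y)
```

Adversarial training, the defense evaluated in the paper, would then reuse such perturbed batches: each training step generates `x_adv` from the current model and minimizes the loss on the adversarial examples instead of (or alongside) the clean ones.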