Evaluating the Resilience of Graph Neural Network Architectures to Adversarial and Noisy Data in High-Stakes Construction Project Management


TOĞAN V., Mostofi F., Tokdemir O. B.

Journal of Construction Engineering and Management, vol.152, no.4, 2026 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 152 Issue: 4
  • Publication Date: 2026
  • DOI: 10.1061/jcemd4.coeng-17244
  • Journal Name: Journal of Construction Engineering and Management
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, ICONDA Bibliographic, INSPEC, Public Affairs Index
  • Keywords: Adversarial attacks, Construction project management, Data noise, Data poisoning, Decision support systems, Graph neural networks (GNN), Machine learning (ML) robustness
  • Affiliated with Karadeniz Technical University: Yes

Abstract

High-stakes mega-construction projects present a challenging environment for decision-support models, which are exposed to risks from both deliberate attacks and unintentional errors. These vulnerabilities can degrade model performance, leading to costly decision-making mistakes. We focus on two major classes of adversarial machine learning attacks and assess their impact on predictive accuracy over graph-structured data containing 267,763 activity records. The first class is data poisoning, in which the training set is deliberately corrupted, through label flipping, random label assignment, or feature manipulation, impairing the model's capacity to learn effectively before deployment. The second class is evasion attacks, such as the fast gradient sign method (FGSM), which strategically perturbs input features using gradient information to mislead the model during inference. GatedGNN outperformed GCN, GAT, and MPNN against poisoning attacks, consistently achieving an F1 score above 75% across all data sets, even under label flipping, the most damaging method. The benchmark models (GAT, GCN, MPNN) suffer comparable F1 losses under random labels and feature manipulation, whereas GatedGNN slightly benefits from mild feature noise thanks to its gating mechanism. Yet FGSM at test time critically damages GatedGNN, dropping its average F1 from 88% on clean data to 5–15%, while GCN, GAT, and MPNN sustain around 55–57%. These findings highlight that robustness is threat-specific: GatedGNN's gating protects against poisoned messages but produces smooth gradients exploitable by FGSM. Practitioners should therefore combine GatedGNN with input sanitization or adversarial training against live-sensor spoofing, while relying on its superior performance against poisoned historical records; for real-time threats, extra defenses are essential.
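The two attack classes discussed above can be sketched in a few lines. The following is a hypothetical illustration, not the paper's code: it shows label-flipping poisoning and a one-step FGSM perturbation on a plain logistic model, so the mechanics are visible without any GNN framework. All function names and parameters here are assumptions chosen for the sketch.

```python
import numpy as np

def flip_labels(y, fraction, rng):
    """Data poisoning by label flipping: invert a chosen fraction of
    binary training labels before the model ever sees them."""
    y = y.copy()
    n_flip = int(fraction * len(y))
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y[idx] = 1 - y[idx]
    return y

def fgsm_perturb(x, y, w, b, epsilon):
    """Evasion via the fast gradient sign method (FGSM): shift every
    input feature by +/- epsilon in the direction that increases the
    loss. For logistic loss, d(loss)/dx = (sigmoid(w.x + b) - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))   # predicted probability
    grad_x = (p - y)[:, None] * w[None, :]   # gradient of loss w.r.t. x
    return x + epsilon * np.sign(grad_x)     # one-step perturbation

# Example: poison 30% of a clean label vector, then craft an FGSM input.
rng = np.random.default_rng(0)
poisoned = flip_labels(np.zeros(10, dtype=int), 0.3, rng)
x_adv = fgsm_perturb(np.zeros((2, 3)), np.array([0.0, 1.0]),
                     np.array([1.0, -2.0, 0.5]), 0.0, 0.1)
```

The sign operation is what makes FGSM so damaging to smooth models: every feature moves by the full budget epsilon at once, which matches the abstract's observation that models with smooth, informative gradients are the most exploitable at inference time.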