A Video-Based Mobile Palmprint Dataset and an Illumination-Robust Deep Learning Architecture for Unconstrained Environments


Koşmaz Sünnetci B., BİNGÖL Ö., GEDİKLİ E., EKİNCİ M., Doğan R. Ö., Türk S., et al.

Applied Sciences (Switzerland), vol.15, no.21, 2025 (SCI-Expanded, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 15 Issue: 21
  • Publication Date: 2025
  • DOI: 10.3390/app152111368
  • Journal Name: Applied Sciences (Switzerland)
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC, Directory of Open Access Journals
  • Keywords: illumination-invariant deep learning, in-the-wild biometric dataset, mobile palmprint recognition
  • Affiliated with Karadeniz Technical University: Yes

Abstract

Featured Application: The proposed framework facilitates secure and user-friendly biometric authentication on mobile platforms, particularly in mobile payment and access control scenarios, under unconstrained conditions.

The widespread adoption of mobile devices has made secure and user-friendly biometric authentication critical. However, widely used modalities such as fingerprint and facial recognition show limited robustness under uncontrolled illumination and on heterogeneous devices. In contrast, palmprint recognition offers strong potential because of its rich textural patterns and high discriminative power. This study addresses the limitations of laboratory-based datasets that fail to capture real-world challenges. We introduce MPW-180, a novel dataset comprising videos of 180 participants recorded on their own smartphones in everyday environments. By systematically incorporating diverse illumination conditions (with and without flash) and natural free-hand movements, MPW-180 is the first dataset to adopt a bring-your-own-device paradigm, providing a realistic benchmark for evaluating generalization in mobile biometric models. In addition, we propose PalmWildNet, an SE-block-enhanced deep learning architecture trained with Triplet Loss and a cross-illumination sampling strategy. The experimental results show that conventional methods suffer over 50% performance degradation under cross-illumination conditions. In contrast, our method reduces the Equal Error Rate to 1–2% while maintaining an accuracy above 97%. These findings demonstrate that the proposed framework not only tolerates illumination variability but also learns robust illumination-invariant representations, making it well-suited for mobile biometric authentication.
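The building blocks named in the abstract can be illustrated with a minimal NumPy sketch. All shapes, names, and the reduction ratio below are assumptions for illustration; the paper's actual PalmWildNet architecture is not specified here. The sketch shows a squeeze-and-excitation channel gate, a hinge triplet loss, and a cross-illumination triplet in which the anchor and positive are embeddings of the same subject under different lighting while the negative comes from another subject.

```python
import numpy as np

def se_block(feat, w1, b1, w2, b2):
    """Squeeze-and-Excitation gate: reweight the channels of a (C, H, W) map."""
    z = feat.mean(axis=(1, 2))                # squeeze: global average pool -> (C,)
    h = np.maximum(0.0, w1 @ z + b1)          # excitation FC1 + ReLU (C // r units)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))  # FC2 + sigmoid -> per-channel gate in (0, 1)
    return feat * s[:, None, None]            # rescale each channel by its gate

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge triplet loss on L2 embedding distances."""
    d_ap = np.linalg.norm(anchor - positive)
    d_an = np.linalg.norm(anchor - negative)
    return max(0.0, d_ap - d_an + margin)

rng = np.random.default_rng(0)
c, r = 8, 2                                   # channels and reduction ratio (assumed)
feat = rng.normal(size=(c, 4, 4))
w1, b1 = rng.normal(size=(c // r, c)), np.zeros(c // r)
w2, b2 = rng.normal(size=(c, c // r)), np.zeros(c)
gated = se_block(feat, w1, b1, w2, b2)

# Cross-illumination sampling (toy embeddings): the anchor and positive show the
# same subject under different lighting, pushing the network toward
# illumination-invariant representations.
emb = {s: {ill: rng.normal(size=16) for ill in ("flash", "ambient")}
       for s in ("subject_a", "subject_b")}
loss = triplet_loss(emb["subject_a"]["flash"],
                    emb["subject_a"]["ambient"],
                    emb["subject_b"]["flash"])
```

Because the sigmoid gate lies strictly in (0, 1), the SE block can only attenuate channels, letting the network learn which channels to emphasize per input; the cross-illumination pairing is what forces the embedding distance to be small across lighting changes for the same palm.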