An efficient human action recognition framework with pose-based spatiotemporal features


Agahian S., Negin F., KÖSE C.

ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, vol.23, no.1, pp.196-203, 2020 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 23 Issue: 1
  • Publication Date: 2020
  • DOI: 10.1016/j.jestch.2019.04.014
  • Journal Name: ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, INSPEC, Directory of Open Access Journals
  • Page Numbers: pp.196-203
  • Keywords: Skeleton-based, 3D action recognition, Extreme learning machines, RGB-D, REPRESENTATION, MACHINE
  • Affiliated with Karadeniz Technical University: Yes

Abstract

In the past two decades, human action recognition has been among the most challenging tasks in the field of computer vision. Recently, accurate and cost-efficient extraction of skeleton information has become possible thanks to cutting-edge deep learning algorithms and low-cost depth sensors. In this paper, we propose a novel framework to recognize human actions using 3D skeleton information. The main components of the framework are pose representation and encoding. Assuming that human actions can be represented by spatiotemporal poses, we define a pose descriptor consisting of three elements. The first element contains the normalized coordinates of the raw skeleton joints. The second element contains the temporal displacement information relative to a predefined temporal offset, and the third element keeps the displacement information relative to the previous timestamp. The final descriptor of the whole sequence is the concatenation of the frame-wise descriptors. To avoid problems arising from high dimensionality, Principal Component Analysis (PCA) is applied to the descriptors. The resulting descriptors are encoded with the Fisher Vector (FV) representation before being used to train an Extreme Learning Machine (ELM) classifier.
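To make the pipeline concrete, below is a minimal Python sketch of the three-part pose descriptor and the PCA, Fisher Vector, and ELM stages. All parameter values (temporal offset, PCA dimensionality, GMM size, hidden-layer width), the per-coordinate normalization scheme, and the basic single-hidden-layer ELM are illustrative assumptions, not the authors' implementation; the sketch treats the frame-wise descriptors as the local features pooled by the Fisher Vector under a diagonal-covariance GMM.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def pose_descriptors(skeleton, offset=10):
    """Frame-wise descriptors for one sequence.

    skeleton: (T, J, 3) array of 3D joint coordinates over T frames.
    Returns a (T, 9*J) array: [normalized joints | displacement vs.
    frame t-offset | displacement vs. frame t-1], as in the abstract.
    Zero-mean/unit-variance normalization is an assumption.
    """
    T = skeleton.shape[0]
    flat = skeleton.reshape(T, -1)
    norm = (flat - flat.mean(0)) / (flat.std(0) + 1e-8)
    d_off = flat - flat[np.maximum(np.arange(T) - offset, 0)]
    d_prev = flat - flat[np.maximum(np.arange(T) - 1, 0)]
    return np.hstack([norm, d_off, d_prev])

def fisher_vector(X, gmm):
    """Fisher Vector of local descriptors X under a diagonal-covariance
    GMM (mean and variance gradients only; weight gradients omitted),
    with the usual power and L2 normalizations."""
    gamma = gmm.predict_proba(X)                 # (N, K) soft assignments
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    N = X.shape[0]
    diff = (X[:, None, :] - mu) / np.sqrt(var)   # (N, K, D) whitened residuals
    g_mu = np.einsum('nk,nkd->kd', gamma, diff) / (N * np.sqrt(w)[:, None])
    g_var = np.einsum('nk,nkd->kd', gamma, diff**2 - 1) / (N * np.sqrt(2 * w)[:, None])
    fv = np.hstack([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))       # power normalization
    return fv / (np.linalg.norm(fv) + 1e-8)      # L2 normalization

class ELM:
    """Basic single-hidden-layer extreme learning machine: random input
    weights, closed-form (pseudo-inverse) output weights."""
    def __init__(self, n_hidden=1000, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)         # random hidden activations
        Y = np.eye(int(y.max()) + 1)[y]          # one-hot class targets
        self.beta = np.linalg.pinv(H) @ Y        # least-squares output weights
        return self
    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)

# Pipeline sketch (data loading omitted; sizes are illustrative):
# descs = [pose_descriptors(seq) for seq in train_skeletons]
# pca = PCA(n_components=64).fit(np.vstack(descs))
# gmm = GaussianMixture(128, covariance_type='diag').fit(
#     pca.transform(np.vstack(descs)))
# X_train = np.array([fisher_vector(pca.transform(d), gmm) for d in descs])
# clf = ELM().fit(X_train, np.asarray(train_labels))
```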