Visual Perception for Multiple Human-Robot Interaction From Motion Behavior


BENLİ E., Motai Y., Rogers J.

IEEE SYSTEMS JOURNAL, vol.14, no.2, pp.2937-2948, 2020 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 14 Issue: 2
  • Publication Date: 2020
  • DOI: 10.1109/jsyst.2019.2958747
  • Journal Name: IEEE SYSTEMS JOURNAL
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Compendex, INSPEC
  • Pages: pp.2937-2948
  • Keywords: Kinematics, Robot sensing systems, Visual perception, Legged locomotion, Head, Command cognition, human motion analysis, multiple human targets, multiple robots, omnidirectional (O-D) camera, robotic perception, target identification, thermal vision, visual perception, walking behavior, RECOGNITION, TRACKING, ACQUISITION, ALGORITHM, VISION, SYSTEM
  • Affiliated with Karadeniz Teknik Üniversitesi: No

Abstract

Visual perception is an important component of human-robot interaction in robotic systems. Interaction between humans and robots depends on the reliability of the robotic vision systems. The variety of camera sensors and their capability to detect many types of sensory inputs improve visual perception. The analysis of the activities, motions, skills, and behaviors of humans and robots has been addressed by utilizing the heat signatures of the human body. Human motion behavior is analyzed through body-movement kinematics, and the trajectory of the target is used to identify objects and the human target in omnidirectional (O-D) thermal images. Human target identification and gesture recognition with traditional sensors are problematic in multitarget scenarios, since these sensors may not keep all targets within their narrow field of view (FOV) at the same time. The O-D thermal view extends the robots' line of sight and yields better perception in the absence of light. The human target is informed of its position, surrounding objects, and any other human targets in its proximity, so that humans with limited vision or a vision disability can be assisted to improve their ability to act in their environment. The proposed method identifies human targets over a wide FOV and under light-independent conditions to assist the human target and improve human-robot and robot-robot interactions. The experimental results show that human targets are identified with high accuracy.
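To make the kinematic idea in the abstract concrete, the following is a minimal illustrative sketch, not the authors' algorithm: given per-frame target centroids detected in O-D thermal images, it derives simple trajectory features (mean speed and heading change) and applies a toy threshold rule to label walking behavior. The function names, the threshold value, and the synthetic track are assumptions introduced only for illustration.

```python
import numpy as np


def kinematic_features(centroids, dt=1.0 / 30.0):
    """Speed and heading-change statistics from a sequence of (x, y) centroids (pixels)."""
    pts = np.asarray(centroids, dtype=float)
    vel = np.diff(pts, axis=0) / dt                      # frame-to-frame velocity (px/s)
    speed = np.linalg.norm(vel, axis=1)                  # walking-speed proxy
    heading = np.unwrap(np.arctan2(vel[:, 1], vel[:, 0]))
    turn = np.abs(np.diff(heading))                      # heading change between frames (rad)
    return {"mean_speed": float(speed.mean()), "mean_turn": float(turn.mean())}


def is_walking(features, speed_thresh=15.0):
    """Toy rule (assumed threshold): label the target as walking if mean speed exceeds it."""
    return features["mean_speed"] > speed_thresh


if __name__ == "__main__":
    # Synthetic centroid track of one target over 30 thermal frames.
    track = [(100 + 2 * t, 200 + 0.5 * t) for t in range(30)]
    feats = kinematic_features(track)
    print(feats, "walking:", is_walking(feats))
```

In practice such features would be computed per target after detection in the O-D thermal image and fed to an identification step; the sketch only shows the trajectory-to-feature stage under those assumptions.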