DeepFuseNet of Omnidirectional Far-Infrared and Visual Stream for Vegetation Detection


Stone D. L., Ravi S., Benli E., Motai Y.

IEEE Transactions on Geoscience and Remote Sensing, vol. 59, pp. 9057-9070, 2021 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 59
  • Publication Date: 2021
  • DOI: 10.1109/tgrs.2020.3044487
  • Journal Name: IEEE Transactions on Geoscience and Remote Sensing
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus, Academic Search Premier, PASCAL, Aerospace Database, Applied Science & Technology Source, Aquatic Science & Fisheries Abstracts (ASFA), Business Source Elite, Business Source Premier, CAB Abstracts, Communication Abstracts, Compendex, Computer & Applied Sciences, Geobase, INSPEC, Metadex, Pollution Abstracts, Civil Engineering Abstracts
  • Page Numbers: pp. 9057-9070
  • Keywords: Visualization, Feature extraction, Robots, Sensors, Vegetation mapping, Cameras, Sensor fusion, Convolutional neural network (CNN), deep learning (DL), object recognition, omnidirectional (O-D) far-infrared (FIR) and visual fusion, semantic extraction, vegetation detection, NEURAL-NETWORK, MOBILE ROBOT, CLASSIFICATION, FUSION, SEGMENTATION, ENVIRONMENT, FOREST
  • Affiliated with Karadeniz Teknik Üniversitesi: Yes

Abstract

This article investigates the application of deep learning (DL) to the fusion of omnidirectional (O-D) infrared (IR) sensors and O-D visual sensors to improve the intelligent perception of autonomous robotic systems. Recent techniques primarily focus on O-D and conventional visual sensors for applications in localization, mapping, and tracking, but robotic vision systems have not sufficiently exploited the combination of O-D IR and O-D visual sensors, coupled with DL, for the extraction of vegetation material. We contrast current approaches with our deep-learning-based vegetation sensor fusion. This article introduces two architectures: 1) two autoencoders feeding into a four-layer convolutional neural network (CNN) and 2) two deep CNN feature extractors feeding a deep CNN fusion network (DeepFuseNet), both fusing the O-D IR and O-D visual sensors to reduce the false detections inherent in indices-based spectral decomposition. We compare our DL results to our previous work with normalized difference vegetation index (NDVI) and IR region-based spectral fusion, and to traditional machine learning approaches. This work demonstrates that fusing the O-D IR and O-D visual streams with our DeepFuseNet DL approach outperforms both the previous NDVI fused with far-IR region segmentation and traditional machine learning approaches. Experimental results validate a 92% reduction in false detections compared to traditional indices-based detection. This article contributes a novel method for the fusion of O-D visual and O-D IR sensors using two CNN feature extractors feeding into a deep CNN (DeepFuseNet).
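
As context for the two-branch fusion described in the abstract, the sketch below shows how two stream-specific CNN feature extractors can feed a small fusion CNN. It is a minimal illustration under assumed settings, not the authors' released architecture: the layer counts, channel widths, input resolutions, and the names StreamExtractor and FusionNet are illustrative choices. For reference, the indices-based baseline the abstract compares against computes NDVI per pixel as NDVI = (NIR - Red) / (NIR + Red).

```python
# Hedged sketch (assumptions, not the paper's code): two CNN feature extractors,
# one per stream (O-D far-infrared and O-D visual), whose feature maps are
# concatenated and passed to a fusion CNN that predicts a per-pixel vegetation logit.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 conv -> batch norm -> ReLU, then 2x2 max pooling to halve resolution
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )


class StreamExtractor(nn.Module):
    """Per-stream CNN feature extractor (depth and widths are illustrative)."""
    def __init__(self, in_ch):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(in_ch, 32),
            conv_block(32, 64),
            conv_block(64, 128),
        )

    def forward(self, x):
        return self.features(x)


class FusionNet(nn.Module):
    """Concatenates IR and visual feature maps and maps them to vegetation logits."""
    def __init__(self):
        super().__init__()
        self.ir = StreamExtractor(in_ch=1)   # single-channel far-infrared stream
        self.vis = StreamExtractor(in_ch=3)  # RGB visual stream
        self.fuse = nn.Sequential(
            nn.Conv2d(256, 128, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, kernel_size=1),  # per-pixel vegetation logit
        )

    def forward(self, ir, vis):
        f = torch.cat([self.ir(ir), self.vis(vis)], dim=1)
        return self.fuse(f)  # upsample / threshold downstream as needed


# Example: one 128x512 unwrapped panoramic frame per stream (sizes assumed)
model = FusionNet()
logits = model(torch.randn(1, 1, 128, 512), torch.randn(1, 3, 128, 512))
print(logits.shape)  # torch.Size([1, 1, 16, 64])
```

The design choice illustrated here is late (feature-level) fusion: each modality keeps its own extractor so IR and visual features are learned independently before a shared network resolves their disagreements, which is the general idea the abstract attributes to DeepFuseNet.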