Use of crowdsourcing in evaluating post-classification accuracy



SARALIOĞLU E., GÜNGÖR O.

EUROPEAN JOURNAL OF REMOTE SENSING, vol.52, pp.137-147, 2019 (SCI-Expanded)

  • Publication Type: Article / Full Article
  • Volume: 52
  • Publication Date: 2019
  • DOI: 10.1080/22797254.2018.1564887
  • Journal Name: EUROPEAN JOURNAL OF REMOTE SENSING
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.137-147
  • Karadeniz Technical University Affiliated: Yes

Abstract

"Crowdsourcing" uses masses of people to solve a specific problem, usually focusing on research strategies to reduce time, cost and effort to create data. Crowdsourcing intrinsically claims that groups can make relatively smarter and better decisions than the most intelligent individual among them. We investigated to see if crowdsourcing could be used to collect control points for usage in calculating post-classification accuracy assessments. For this purpose, a test was done using class values of randomly generated 1000 control points. Its goal was to explore the accuracy of a specific class values to be entered by three different users by utilizing majority voting method. While examining 3 data sets containing 1000 points, it could be observed that the class values of only 4 points were entered incorrectly. When the support vector machine classification results were evaluated, using the same 1000 control points generated by the experts and the crowdsourcing (containing 4 faulty points), the classification accuracies were found to be 85.0487% and 84.6154%, respectively. Results show that crowdsourcing offers a quicker and more reliable post-classification accuracy assessment for high-spatial resolution multispectral images.