Use of crowdsourcing in evaluating post-classification accuracy

EUROPEAN JOURNAL OF REMOTE SENSING, vol. 52, pp. 137-147, 2019 (SCI-Expanded)


"Crowdsourcing" uses masses of people to solve a specific problem, usually focusing on research strategies to reduce time, cost and effort to create data. Crowdsourcing intrinsically claims that groups can make relatively smarter and better decisions than the most intelligent individual among them. We investigated to see if crowdsourcing could be used to collect control points for usage in calculating post-classification accuracy assessments. For this purpose, a test was done using class values of randomly generated 1000 control points. Its goal was to explore the accuracy of a specific class values to be entered by three different users by utilizing majority voting method. While examining 3 data sets containing 1000 points, it could be observed that the class values of only 4 points were entered incorrectly. When the support vector machine classification results were evaluated, using the same 1000 control points generated by the experts and the crowdsourcing (containing 4 faulty points), the classification accuracies were found to be 85.0487% and 84.6154%, respectively. Results show that crowdsourcing offers a quicker and more reliable post-classification accuracy assessment for high-spatial resolution multispectral images.