Course:LIBR559A/Borromeo, R. M. & Toyama, M. (2016)

Citation

Borromeo, R. M., & Toyama, M. (2016). An investigation of unpaid crowdsourcing. Human-centric Computing and Information Sciences, 6(11). Available at: https://link.springer.com/content/pdf/10.1186%2Fs13673-016-0068-z.pdf

Annotation

The purpose of this article is to explore unpaid crowdsourcing by reviewing applications where the crowd comes from a pool of volunteers and by evaluating its performance compared with paid platforms.

Crowdsourcing is a form of human computation defined as the practice of obtaining information or services by soliciting input from a large number of people via the internet. Unpaid crowdsourcing employs volunteers as workers, offering no monetary incentives. Some examples of where unpaid crowdsourcing can be used are:

  • Citizen science: a form of research in which members of the public participate to aid in carrying out scientific research. Volunteers perform tasks such as observation, measurement, or computation.
  • Disaster response and relief: volunteers help solve computational problems or even assist in the actual response and relief operations.
  • Traffic management: the crowd collects data on traffic conditions in specific locations.
  • Education: in some cases, crowdsourcing has been used to grade or evaluate students' work.

The authors conducted case studies in which they deployed crowdsourcing initiatives on both paid and unpaid platforms to analyze differences in performance:

  • Case 1: Sentiment analysis: workers analyzed texts (students' comments) to determine the sentiment of each message. On the paid platform, it took 2.9 hours and 86 workers to complete the task. To evaluate the performance of the unpaid platform, the authors advertised the project on social media and sent personal email messages to invite people; 46 volunteers participated and completed the task in 44.8 hours. The paid method was much faster than the unpaid one, and the accuracy of the two methods was similar.
  • Case 2: Data extraction: workers extracted information from a digital repository to create an index for researchers. On the paid platform, it took 2.50 hours and 43 workers to complete the task. For the unpaid version, the authors invited members of a database research laboratory, a total of 21 volunteers, and set the deadline 228 hours after the announcement of the project. This time, the paid version achieved only 27.05% accuracy, while the unpaid version achieved 89.61%. The difference may be attributed to the characteristics of the crowd, who understood the project and had the necessary language skills and level of education. The figures from both cases are tabulated in the sketch after this list.
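
To make the comparison concrete, the short Python sketch below (my own illustration, not taken from the article) tabulates the figures reported above and computes how much longer the unpaid runs took; note that treating Case 2's 228-hour deadline as the unpaid completion time is an assumption on my part.

  # Rough, illustrative tabulation of the figures reported in the two case
  # studies; the variable names are mine, not the authors'. Using the
  # 228-hour deadline as Case 2's unpaid completion time is a simplification.

  cases = {
      # task: (paid_hours, paid_workers, unpaid_hours, unpaid_workers)
      "sentiment analysis": (2.9, 86, 44.8, 46),
      "data extraction": (2.50, 43, 228.0, 21),
  }

  for task, (paid_h, paid_w, unpaid_h, unpaid_w) in cases.items():
      print(f"{task}: unpaid took {unpaid_h / paid_h:.1f}x longer "
            f"({paid_w} paid workers vs. {unpaid_w} volunteers)")

  # Accuracy was reported numerically only for the data extraction case.
  paid_acc, unpaid_acc = 0.2705, 0.8961
  print(f"data extraction accuracy gap: {unpaid_acc - paid_acc:.2%}")

Running this shows the unpaid sentiment analysis task taking roughly 15 times longer than the paid one, while the unpaid data extraction task traded (at most) a much longer turnaround for an accuracy gain of over 62 percentage points.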

The authors observed that paid crowdsourcing involves two steps: deploying the project and waiting for responses. Unpaid crowdsourcing adds an important third step: gathering the crowd. Some types of tasks are better suited to unpaid crowdsourcing and others to paid. It is not always necessary to offer monetary incentives to workers, but aspects such as time and budget constraints should be considered.

The novel idea introduced by the article is the comparison of the performance of paid and unpaid crowdsourcing. Its contribution to the LIS field is that budget constraints affect many libraries and information institutions, and knowing that unpaid crowdsourcing can deliver accurate results at zero cost makes it a viable alternative for these organizations. For example, archives and museums could benefit from unpaid crowdsourcing by asking people to identify an object and its context, which would help the community preserve its history and help these institutions provide correct information.

The article presents several initiatives and platforms used in unpaid crowdsourcing, but the information it provides about them is superficial. The authors also tend to repeat the same statements several times, for example the difference between paid and unpaid crowdsourcing, which becomes tedious for the reader. Although in class we saw the negative aspects of crowdsourcing, where companies exploit people to increase their profits, this article is associated with the positive side of crowdsourcing: disaster relief, education, the advancement of science, traffic management, and many other initiatives, such as museums and archives.

Areas, topics and keywords

Crowdsourcing. Sentiment analysis. Data extraction

Page author: Paula Arasaki