After the workshop I received two comments that particularly stuck in my mind. One was that I should have told people that if they contributed to the titanpad notes, I would write a blog post summarizing them. I was happy that at least someone thought that a blog post would be a good idea. (I hadn't considered that having one's thoughts summarized in my blog would be a motivational factor.) The other comment was that the panel question was not controversial enough to spark a good discussion.
In response to these two comments, here is a summary/interpretation of what was recorded in the crowdsourced notes about the panel.
- The biggest challenge of crowdsourcing is to design a product in which crowdsourcing adds value for the user. Crowdsourcing should not be pursued unless it makes a clear contribution.
- The way that the crowd uses a crowdsourcing platform, or a system that integrates crowdsourcing, is essential. Here, engagement of the crowd is key, so that they are "in tune" with the goals of the platform and make a productive contribution.
- Another major challenge is the principle of KYC. Here, instead of Know Your Client, this is Know Your Crowd. There are many individual and cultural differences between crowd members that need to be taken into account.
- The problem facing many systems is not the large amount of data, but that the data is unpredictably structured and heterogeneous, making it difficult to ask the crowd to actually do something with it.
- With human contributors in the crowd, people become afraid of collusion attacks that go against the original, or presumed, intent of a platform. A huge space for discussion (which was not pursued during the panel) opens about who has the right to decide what the "right" and "wrong" ways are to use a platform.
- Crowdwork can be considered a case of people paying with their time: we need to carefully think about what they receive in return.
- With the exception of this last comment, it seemed that most people on the panel found it difficult to say something meaningful about ethics in the short time that was available for the discussion.
At this point, it seems that we are seeing more recommender systems that involve taggers and curators. Amazon Mechanical Turk, of course, came into being as an in-house system to improve product recommendations (cf. Wikipedia). However, it seems that recommender systems that actively leverage the input of the crowd still need to come into their own.
Martha Larson, Domonkos Tikk, and Roberto Turrin. 2015. Overview of ACM RecSys CrowdRec 2015 Workshop: Crowdsourcing and Human Computation for Recommender Systems. In Proceedings of the 9th ACM Conference on Recommender Systems (RecSys '15). ACM, New York, NY, USA, 341-342.