Tuesday, October 22, 2013

CrowdMM 2013: Crowdsourcing in Multimedia: Emerged or Emerging?

CrowdMM 2013, the 2nd International ACM Workshop on Crowdsourcing for Multimedia, was held in conjunction with ACM Multimedia 2013 on 22 October 2013. The workshop is the second edition of the CrowdMM series, which I have previously written about here. This year it was organized by Wei-Ta Chu (National Chung Cheng University, Taiwan), Kuan-Ta Chen (Academia Sinica, Taiwan), and myself, with critical support from Tobias Hossfeld (University of Wuerzburg) and Wei Tsang Ooi (NUS). The workshop received support from two projects funded by the European Union, Qualinet and CUbRIK.

During the workshop, we had an interesting panel discussion on the topic "Crowdsourcing in Multimedia: Emerged or Emerging?" The members of the panel were Daniel Gatica-Perez from Idiap (who keynoted the workshop with a talk entitled "When the Crowd Watches the Crowd: Understanding Impressions in Online Conversational Video"), Tobias Hossfeld (who organized this year's Crowdsourcing for Multimedia Ideas Competition), and Mohammad Soleymani from Imperial College London (with whom I presented a tutorial on Crowdsourcing for Multimedia Research the day before). The image above shows the whiteboard on which I attempted to accumulate the main points raised by the audience and the panel members during the discussion. The purpose of this post is to give a summary of these points.

At the end of the panel, the panelists, together with the audience, decided that crowdsourcing for multimedia has not yet reached its full potential, and that it should therefore be considered "emerging" rather than already "emerged". This conclusion was interesting in light of the fact that the discussion revealed many areas in which crowdsourcing represents an extension of previously existing practices, or stands to benefit from established techniques and theoretical frameworks. These factors are arguments that can be marshaled in support of the "emerged" perspective. In the end, however, the arguments for "emerging" had the clear upper hand.

Because the ultimate conclusion was "emerging", i.e., that the field is still experiencing development, I decided to summarize the panel discussion not as a series of statements, but rather as a list of questions. Please note that this summary is from my personal perspective and may not exactly represent what was said by the panelists and the audience during the panel. Any discrepancies, I hope, rather than being bothersome, will provide seeds for future discussion.

Summary of the CrowdMM 2013 Panel Discussion
"Crowdsourcing for Multimedia: Emerged or Emerging"

Understanding: Any larger sense of purpose that can be shared within the crowdsourcing ecosystem could be valuable for increasing motivation, and thereby quality. What else can we do to fight worker alienation? Why don't task askers ask the crowdworkers who they are? And vice versa?

Best Practices: There is no magic recipe for crowdsourcing for multimedia. Couldn't the research community be doing more to share task designs, code, and data? Would that help? Factors that contribute to the success of crowdsourcing are watertight task design (test, test, and test the design again before running a large-scale experiment), detailed examples or training sessions, inclusion of verification questions (see the sketch below), and making workers aware of the larger meaning of their work. Do tasks have to be fun? Certainly, they should run smoothly so that crowdworkers can hit a state of "blissful productivity".
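
To make the verification-question point concrete, here is a minimal sketch of how workers might be filtered by their accuracy on a few embedded questions with known answers. The data, names, and the 0.8 threshold are illustrative assumptions on my part, not something prescribed during the panel.

```python
# Minimal sketch: filtering crowdworkers by verification ("gold") questions.
# All data, names, and the threshold are illustrative assumptions.

# Known answers to the verification questions embedded in the task.
gold_answers = {"q1": "cat", "q2": "dog", "q3": "bird"}

# Each worker's answers to those same questions.
worker_responses = {
    "worker_a": {"q1": "cat", "q2": "dog", "q3": "bird"},
    "worker_b": {"q1": "cat", "q2": "cat", "q3": "cat"},
}

def gold_accuracy(responses, gold):
    """Fraction of verification questions a worker answered correctly."""
    correct = sum(1 for q, answer in gold.items() if responses.get(q) == answer)
    return correct / len(gold)

# Keep only workers above an (arbitrary) accuracy threshold.
THRESHOLD = 0.8
trusted = {w for w, r in worker_responses.items()
           if gold_accuracy(r, gold_answers) >= THRESHOLD}
print(trusted)  # {'worker_a'}
```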

History: Many of the issues of crowdsourcing are the same ones encountered when we carry out experiments in the lab. Do we make full use of this carry-over? Crowdsourcing experiments can be validated against corresponding experiments carried out in a lab environment. Do we do this often enough?

Markets: Many of the issues of crowdsourcing are related to economics, and in particular to the way that the laws of supply and demand operate in an economic market. Have we made use of theories and practices from this area?

Diversity: Why does the community use crowdsourcing mostly for image labeling? What about other applications such as system design and testing? What about techniques that combine human and conventional computation in online systems?

Reproducibility: Shouldn't reproducibility be the ultimate goal of crowdsourcing? Are we making the problem too simple in the cases where we struggle with reproducibility? Understanding the input of the crowd as being influenced by multiple dimensions can help us design crowdsourcing experiments that are highly replicable.

Reliability: Have we made use of reliability theory? How about test/retest reliability as used in psychology? A small example follows below.
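
As an illustration of test/retest reliability, one could run the same rating task twice on the same items and correlate the two sets of aggregated scores. A minimal sketch with invented numbers:

```python
# Minimal sketch: test/retest reliability for a crowdsourced rating task.
# The same items are rated in two separate runs; the Pearson correlation
# between the aggregated scores serves as a reliability estimate.
# The numbers below are invented for illustration.
from statistics import correlation  # Python 3.10+

run1 = [4.2, 3.8, 2.1, 4.9, 3.3]  # mean rating per item, first run
run2 = [4.0, 3.6, 2.4, 4.7, 3.5]  # mean rating per item, repeated run

print(f"test/retest reliability (Pearson r): {correlation(run1, run2):.3f}")
```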

Uncertainty: Are we dealing with noisier data? Or has crowdsourcing actually allowed us to move to more realistic data? Human consensus is the upper limit of what we can derive from crowdsourcing, but does human consensus not in turn depend on how well the task has been described to crowdworkers (see the sketch below)? Why don't we do more to exploit the whole framework of probability theory?
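
The idea of human consensus as an upper limit can be made tangible with a simple majority vote over the labels collected for each item, where per-item agreement gives a rough indication of how well-defined the task was for the workers. A minimal sketch with invented labels:

```python
# Minimal sketch: majority-vote consensus over crowd labels, with a simple
# per-item agreement score as a rough measure of human consensus.
# The labels below are invented for illustration.
from collections import Counter

labels_per_item = {
    "img_001": ["cat", "cat", "dog", "cat", "cat"],
    "img_002": ["cat", "dog", "dog", "cat", "dog"],
}

for item, labels in labels_per_item.items():
    label, votes = Counter(labels).most_common(1)[0]
    agreement = votes / len(labels)
    print(f"{item}: consensus={label}, agreement={agreement:.2f}")
```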

Gamification: Will it solve the issue of how to effectively and appropriately incentivize the crowd? Should the research community be the one to push gamification forward? (Do any of us realize how many people and resources it takes to make a really successful commercial game?)

Design: Aren't we forgetting about a lot of work that has been done in interaction design?

Education: Can we combine crowdsourcing systems with systems that help people learn skills that are useful in real life? In this way, crowdworkers would receive more than just money from the system in exchange for their crowdwork.

Cats: Labeling cats is not necessarily an easy task. Is a stuffed animal a cat? How about a kitten?