ACM Multimedia 2013 has introduced a new Crowdsourcing for Multimedia area:
The area cuts across traditional multimedia areas, touching upon nearly every topic relevant to multimedia. It casts a wide net to include the full range of research results and novel ideas in multimedia that are made possible by the crowd, i.e., results that exploit crowdsourcing principles and techniques.
How did this new area arise? Crowdsourcing's grand debut at ACM Multimedia was CrowdMM http://www.crowdmm.org/. On October 29, 2012, CrowdMM 2012, the International ACM Workshop on Crowdsourcing for Multimedia, was held in conjunction with ACM Multimedia 2012 in Nara, Japan. The workshop kicked off with a keynote entitled "PodCastle and Songle: Crowdsourcing-Based Web Services for Spoken Content Retrieval and Active Music Listening" by Masataka Goto of the National Institute of Advanced Industrial Science and Technology (AIST), Japan. These two systems dazzled the audience and gave us a foretaste of the possibilities that the power of the crowd opens for the multimedia community. An interesting day of talks, posters, and discussion ensued, culminating in a panel (summarized below).
The organizers of CrowdMM 2012 hope that both the ACM Multimedia area (focused on groundbreaking research results) and the CrowdMM workshop (focused on methodology, exploratory work, and researcher interaction) will provide a solid foundation that allows crowdsourcing for multimedia to grow within the multimedia community and reach its full potential.
"Crowdsourcing for multimedia: At a crossroad or on a superhighway?"
Summary of the Panel Discussion at CrowdMM 2012
What is the potential of crowdsourcing for ACM Multimedia?
We need The Crowd to allow us to build larger, up-to-date dictionaries for multimedia annotation. We also need The Crowd to create ground truth at a large scale.
Combining techniques for active learning with techniques for incentivizing human contributions will benefit many different specific multimedia problems.
In all cases, quality control will be important, as will making it fun for The Crowd to contribute, e.g., by continuing to build entertaining games that collect Crowd contributions. User engagement breeds quality: Songle, for example, provides services that are enjoyable to use and thus attracts good workers naturally.
What are the limitations of crowdsourcing for multimedia?
Data from non-experts is valuable, but for some tasks we need experts. We need methods that will allow us to identify experts, for example, with domain knowledge. The multimedia community can potentially address the problem of having access to "the right crowd", by joining forces to cultivate a community of crowdsourcing workers who deliver high quality annotations for specific multimedia domains.
How would ACM Multimedia be different had crowdsourcing been invented 20 years ago? If crowdsourcing had existed 20 years ago, we would have made much more effective use of active learning paradigms, i.e., algorithms that interactively query human annotators to obtain new labels for certain multimedia items.
Crowdsourcing makes large-scale multimedia annotation possible. However, even if crowdsourcing had existed 20 years ago, we may not have had the tools and techniques to deal with large-scale data.
The challenge today is to realize the potential of multimedia, both by venturing into new domains for research and by scaling up our systems to exploit larger amounts of human-labeled data for training as well as for evaluation.
In short, the panel concluded, it’s up to ACMMM to catch up with the crowd.
A big thank you to my fellow organizers for the work that they put into making CrowdMM 2012 such a success: