Friday, July 23, 2010

SIGIR 2010 Crowdsourcing for Search Evaluation Workshop

We used Amazon Mechanical Turk (MTurk) to gather annotations for the video corpus to be used in the Affect Task at the MediaEval 2010 benchmark evaluation. The task involves automatically identifying videos that viewers report to be particularly boring.

We wrote up the corpus development and submitted it to the Crowdsourcing for Search Evaluation Workshop at SIGIR 2010. Initially we wondered whether the paper was appropriate for the workshop, since we were working on affect and not directly on search. But we were glad that we took the risk and went for it. The paper was accepted and the workshop was great -- right on target with our interests.
Soleymani, M. and Larson, M. 2010. Crowdsourcing for Affective Annotation of Video: Development of a Viewer-Reported Boredom Corpus. In Proceedings of the SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation.
Our paper was also named Runner Up for the Most Innovative Paper Award, which was sponsored by Microsoft Bing. Thank you! We are already considering how to get the most bang for our Bing bucks. Most likely they will flow directly back into MTurk for our next crowdsourcing project.
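A side note for anyone putting together a similar corpus: once the MTurk judgments are in, deriving corpus-level labels can be as simple as averaging the per-video ratings. The sketch below is purely illustrative and is not our actual pipeline; the input file, the column names (video_id, boredom) and the median-based threshold are all assumptions made for the example.

    # Illustrative sketch (not our actual pipeline): aggregate per-video
    # boredom ratings collected from MTurk workers into corpus labels.
    # Assumes a hypothetical CSV with columns "video_id" and "boredom"
    # (a numeric self-report scale); the threshold choice is also an assumption.
    import csv
    from collections import defaultdict
    from statistics import mean, median

    def load_ratings(path):
        """Read worker judgments into {video_id: [ratings]}."""
        ratings = defaultdict(list)
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                ratings[row["video_id"]].append(float(row["boredom"]))
        return ratings

    def label_boring(ratings):
        """Flag videos whose mean rating exceeds the corpus-wide median."""
        means = {vid: mean(vals) for vid, vals in ratings.items()}
        threshold = median(means.values())
        return {vid: m > threshold for vid, m in means.items()}

    if __name__ == "__main__":
        ratings = load_ratings("mturk_judgments.csv")
        for vid, boring in sorted(label_boring(ratings).items()):
            print(vid, "boring" if boring else "not boring")

In practice one would also want to filter or weight workers by reliability before aggregating, but that is a topic for another post.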
Labels: affect, crowdsourcing, MediaEval, SIGIR, Switzerland, VideoCLEF