This week, I have been answering email from people who have proposed tasks for the MediaEval 2012 multimedia benchmark or have been contacted about becoming a core participant for a task. The emails have been asking for clarification on what exactly a MediaEval task core participant is. Instead of writing the same email back to everyone, I am putting my answer in this blog post.
The only official definition of a core participant is "a participating team that agrees, before the official beginning of MediaEval task sign up, to cross the finish line for a given task come hell or high water". I am actively looking for a paraphrase of this definition, but for the moment, please be tolerant of the use of an idiomatic phrase (and one containing the word "hell", at that).
How can a core participant team be sure in February that they will complete a task in late summer? Usually, a core participant team can reliably predict that they will cross the finish line on a task because they have successfully completed similar tasks in the past. In most cases, they will have a specific person or persons on their team who are working on the task as an integral part of their research project.
If you don't think that you can commit to being a core participant, that is absolutely fine. You can sign up for the task with the normal sign up procedure and participate as a general participant. General participants must make a genuine effort to complete the task and we also expect them to attend the MediaEval workshop if at all humanly possible. General participants also have a much longer window in which to consider task participation before deciding to sign up. Regular sign up won't close until the end of April 2012.
Within MediaEval, fine-grained decisions are left as much as possible to the organizers of the individual tasks. The consequence is that, other than the "come hell or high water" definition, it is left up to the task organizers to interpret exactly what a core participant is. For this reason, this detailed discussion of the matter appears on my blog and not on the MediaEval website.
In order to understand what a core participant might do in any given task, here is the history of "core participants" in MediaEval tasks.
We introduced the notion of core participants after the MediaEval 2010 workshop. At the workshop, we had noticed that (not surprisingly) tasks for which there were a lot of results to compare gave us more interesting insight. We wanted a way of ensuring that a minimum number of teams would complete any given task, but we didn't want to use an inelegant, alienating solution, such as cancelling tasks that don't receive a minimum number of successful run submissions. For this reason, a task is required to produce evidence that it will have a minimum number of results to compare at the workshop before being officially accepted as a MediaEval task. This evidence takes the form of a list of core participants.
While the season is running, the task organizers might ask the core participants to help out with the task. For example, last year core participants helped debug the evaluation script. Another example is supporting newcomers. Sometimes new participants in MediaEval need help in understanding some details of the instructions. In this case, a task organizer might ask an experienced core participant to communicate with a newbie. Newbies are often students who have just started their MSc or PhD programs and have a lot to think about at once---we don't want them getting stuck on easily answered questions. It might also save them time if they can communicate with someone in a language that they can type more quickly than English. That someone could be a core participant.
There is a clear line between a task organizer and a core participant. A core participant does not have any official responsibility other than finishing the task. Core participants help the task organizers only if they feel inspired to---it's not required. We also assume that core participants are removed enough from the data set creation process that they cannot have a relative advantage on the task over a general participant. For this reason, their scores can be included in the official ranking of scores for a task presented at the MediaEval workshop---task organizers are highly encouraged to carry out their own tasks, but their scores are officially excluded from the ranking presented at the workshop.