Today, in Torino, Italy, was the day of the Search Computing and Social Media Workshop organized by Chorus+, Glocal and PetaMedia. As the PetaMedia organizer, I had the honor of opening the workshop with a few words. I tried to set the tone by making the point that information is inherently social, being created by people, for people. Digital media simply extends the reach of information, letting us exchange it with others and with ourselves across the constraints of time and space.
The panel at the end of the day looped back to this idea to discuss the human factor in search computing. We collected points from the workshop participants on pieces of paper to provide the basis for the group discussion. I made some notes about how this discussion unfolded, and I'm recording them here while they are still fresh in my head.
We started by tackling a big, unsolved issue: privacy. The point was made that the very reason social media exists at all is that people seem driven in some way to give up their privacy, to share things about themselves that no one would know unless they were revealed. Whether or not users do or should compromise their own privacy by sharing personal media was noted to depend on the situation. For some people it's simply, obviously the right thing to do. Concerns were raised about people not knowing the consequences: I may effectively be a totally different person five years from now than I am today, but I will still be followed by the consequences of today's sharing habits. In the end, the point was made that if the willingness among users to share stops, we as social media researchers have little left to examine.
Next we moved to the question of events in social media: humans don't agree about what constitutes an event. Wouldn't it be easier to simply adopt as our idea of an event whatever our automatic methods tell us is an event? Effectively we do this anyway. We have no universal definition of an event. There may be some common understanding or conventions within a community that define what an event is. However, these do not necessarily involve widespread consensus: they may be personal and they may evolve with time. For example, is "freedom" an event? Most people agreed that it is not.
An event is a context. That's it. At the root of things, there are no events. Instead, we use concepts to build from meaning to situational meaning -- to the interpretation of the meaning of the context. Via this interpretation, the impression of an event emerges. In the end, meaning is negotiated.
If we say events are nothing, we wouldn't be able to recognize them. Or does the computer simply play a role in the negotiation game? The systems we build "teach" us their language, and we adapt ourselves to their limitations and to the interpretative opportunities that they offer.
Then the question came up about the problems that we choose to tackle as researchers. "Are we hunting turtles because we can't catch hares?" This bothered me a bit, because assuming you can easily catch a turtle, it is quite difficult to kill because of its shell; the hare would be easier. Do our data sets really allow us to tackle "the problem"? The question presupposes that we know what "the problem" is, which may amount to solving the problem in the first place. Maybe if we can offer the user in a given context enough results that are good enough, they will be able to pick the one that solves "the problem". Perhaps that's all there is to it. Under such an interpretation, the human factor becomes an integral part of the search problem.
In the end, a clear voice with a succinct take-home message:
How can we efficiently combine both the human factor and technology approaches?
"The machine can propose and the user can decide."
The discussion ended naturally with a Tim Berners-Lee quote, reminding us of the original social intent underlying the Web. We adjourned for some more social networking among ourselves, reassuring ourselves that as long as we are still asking the question, we shouldn't expect to find ourselves completely off track.
Friday, September 2, 2011
MediaEval 2011: Reflections on community-powered benchmarking
The 2011 season of the MediaEval benchmark culminated with the MediaEval 2011 workshop that was held 1-2 September in Pisa, Italy at Santa Croce in Fossabanda. The workshop was an official satellite event of Interspeech 2011.
For me, it was an amazing experience. So many people worked so hard to organize the tasks, to develop algorithms, to write their working notes papers and to prepare their workshop presentations. I ran around like crazy worrying about logistics details, but every time I stopped for a moment I was immediately caught up in the amazement of learning something new, or of realizing that someone had pushed a step further on an issue where I had been blocked in my own thinking. There's a real sense of traction -- the wheels are connected with the road and we are moving forward.
I make lists of points designed to fit on a PowerPoint slide and to succinctly convey what MediaEval actually is. My most recent version of this slide states that MediaEval:
- ...is a multimedia benchmarking initiative.
- ...evaluates new algorithms for multimedia access and retrieval.
- ...emphasizes the "multi" in multimedia: speech, audio, visual content, tags, users, context.
- ...innovates new tasks and techniques focusing on the human and social aspects of multimedia content.
- ...is open for participation from the research community.
At the workshop I attempted to explain it with a bunch of circles drawn on a flip chart (image above). The circles represent people and/or teams in the community. A year of MediaEval consists of a set of relatively autonomous tasks, each with its own organizers. Starting in 2011, we also required that each task have five core participants who commit to crossing the finish line on the task. Effectively, the core participants started playing the role of "sub-organizers", supporting the organizers by doing things like beta testing evaluation scripts.
This setup served to distribute the work and the responsibility over an even wider base of the MediaEval community. Although I do not know exactly how MediaEval works, I have the impression that this distribution is a key factor. I am interested to see how this configuration develops further next year.
MediaEval has the ambitious aim of quantitatively evaluating algorithms that have been developed at different research sites. We would like to determine the most effective methods for approaching multimedia access and retrieval tasks. At the same time, we would like to retain other information about our experience. It is critical that we do not reduce a year of a MediaEval task to a pair (winner, score). Rather, we would like to know which new approaches show promise, independently of whether they are already far enough along to show improvement in a quantitative evaluation score. In this way, we hope that our benchmark will encourage rather than repress innovation.
I turned from trying to understand MediaEval as a whole to trying to understand what I do. Among all the circles on this flip chart, I am one of the circles. I am a task organizer, a participant (time permitting), and I also play a global glue function: coordinating the logistics.
The MediaEval 2012 season kicks off with one of the largest logistics tasks: collecting people's proposals for new MediaEval tasks, making sure that they include all the necessary information and a good set of sub-questions, and getting them packed into the MediaEval survey. It is on the basis of this survey that we decide which tasks will run in the next year. We use the experience, knowledge and preferences of the community to select the most interesting, most viable tasks and also to decide on some of the details of their design.
Five years ago, if someone had told me I would be editing surveys for the sake of advancing science, I would have said they were crazy. Oh, and I guess I also ordered the "mediaeval multimedia benchmark" T-shirts. That's just what my little circle in the network does.
Let's keep moving forward and find out where our traction lets us go.