Monday, August 10, 2020

Three Laws of Robotic Language

This post is a draft on which I am currently eliciting feedback. Changes may be made in the future.

Artificial Intelligence that can produce language is improving in leaps and bounds (cf. the recent GPT-3 as reported on, e.g., in The Economist). However, it is still early enough to think seriously about how we should guide the development of language AI in order to maintain influence over the large-scale, long-term effects of automatic language generation. Asimov’s Three Laws of Robotics have inspired AI researchers to make conscious design choices during the early stages of new technologies. Parallel to these laws, this post proposes Three Laws of Robotic Language. We understand robotic language as language (written, spoken, or signed) that was generated partially or entirely by an automatic system. Because such a system can be seen as a machine engaging in a conventionally human activity, we refer to it as a language robot. These laws are intended to support researchers developing AI for natural language generation. They are formulated to help lay a solid foundation for what is to come by inspiring careful reflection about what we need to get right from the beginning, and the mistakes we need to avoid.

The Three Laws of Robotic Language

First Law: A language robot must declare its identity.
Second Law: A language robot’s identity must be easy to verify.
Third Law: A language robot’s identity must be difficult to counterfeit.

Practical benefits

Adopting these Three Laws would support desirable practical properties of robotic language as its use becomes more widespread:

  • People (readers, consumers) will be able to identify content as robotic language (as opposed to language produced by people) without relying on sophisticated technology.
  • People will be able to confirm the source of the content without relying on sophisticated technology.
  • Entities (organizations, companies) that generate high-quality, reliable robotic language can be sure that consumers can recognize and trust their content. 
  • Entities that generate robotic language can more easily ensure that they don’t unwittingly train their language generation systems on previously generated robotic language.

Like the Three Laws of Robotics, these laws depend on adoption by the people and organizations that develop and control technology. For many, the practical properties delivered by the laws will be convincing enough. For others, it will be important to understand the link between these laws and the nature of human language, which is explained next.

Moving robotic language towards human language

Currently, the success of robotic language is judged by its ability to fool a reader into mistaking it for language generated by a human. This criterion seems sensible for judging individual sentences, paragraphs or documents. Adopting this criterion implies that we, effectively, regard human language as the generation and exchange of sequences of words and that we consider the aim of language robots to be approximating these sequences. However, if we look at the larger picture of how people actually use language, we see that language goes beyond word sequences. What interests us here is how language conveys the connection between the creator (i.e., who is speaking or writing) and the language content that they create (i.e., what is spoken or written). The Three Laws of Robotic Language state that when language robots generate language content, information about the creator must be inextricable from that content. Adding the criterion of creator-content inextricability should not be considered a nice-to-have functionality that can optionally be added to language robots at some future point. Rather, this feature must be planned from the beginning, before language robots establish themselves as a major source of the language content that we consume.

For some, the idea that the connection between creator and content is an important part of language is surprising. It is not, however, radically new, but rather an observation, perhaps so obvious that it is easily overlooked. Think about speaking to a baby or an animal: they react to the you-ness of your voice, although they might not understand your words. Our voices identify us as individuals. On top of that, when we hear a voice we may not recognize the specific person speaking, but we still hear something about them. Speech is produced by the human body, and is given its form by our mouths and nasal cavities. Our voices identify something about us, e.g., how big we might be. The Three Laws of Robotic Language are, at their root, a proposal to give language robots a “sound of voice” that would carry information about the origin of the language content that they produce. Language robots must identify themselves, or at least reveal enough about themselves so that it is clear (without the need for sophisticated technology) that they are robots.

In order to better grasp why the inextricability of creator from content is a fundamental characteristic of human language, it helps to look back in time. Throughout most of the history of language, speech could not exist independently of a speaker (and sign language could not exist independently of a signer). It was impossible to decouple the words and the source of the words. It is only with the rise of written language that we have the option of breaking the content-creator association, allowing language content to float free of the person who produced it. Most recently, speech synthesis or sign synthesis can also disassociate the speaker from what is spoken. This possibility of content-without-creator now feels so natural to us that it is hard to imagine that it was not originally a property of human language. However, the age of speech-only language was tens of thousands of years (possibly more) longer than the current era of written language. It may seem strange from the perspective of today, but the original state of human language is one of inextricability: speech could not exist without a speaker. 
 
In short, we know that language works well with inextricability: that’s the way in which human language was originally developed and used. For this reason, the Three Laws of Robotic Language should not be considered an unnatural imposition, but rather a gentle requirement that language robots behave in a way that is closer to the original state of language.

An important design choice

It is important to note that when humans use language they creatively manipulate the connection between who is creating and what is created. We imitate others’ voices. We quote other people. We love the places where we can yell and hear our voices echoed back. Once written language introduced the possibility of extricating the creator from language content, we started to take advantage of the option of hiding our identities: we use pen names and we write anonymous messages. The Three Laws of Robotic Language constrain the ability of language robots to engage in these kinds of activities. For example, the laws prevent them from generating anonymous content or producing imitations that are impossible to detect.

At first consideration, it seems that the Three Laws of Robotic Language represent an unnecessary hindrance or constraint. However, human language is characterized by strong constraints. On further thought, it becomes clear that language robots need to be subject to some form of constraint if they are to interact productively with naturally constrained humans over the long run in large-scale information spaces.
 
The constraints on human language are human mortality and limited physical strength. When we focus on a small, local scale, thinking about individual texts and short periods of time, we risk overlooking these constraints. However, they are there and their effect is important.

First, think about human mortality: A given person can produce and consume only so many words in their lifetime. Our deaths represent a hard limit, and force us to choose, over the course of our existence, what we say and what we don’t say, what we listen to and read, and what we don’t. A language robot needs shockingly little time to generate the same amount of language content that a human would produce (or could consume) in a lifetime.

Second, think about human physical strength. Language is the means by which humans as a species have pooled their physical strength. Language allows us to engage in coordinated action towards a common goal. We use language to convince other people to adopt our opinions or follow our plans. The power of our language to convince is limited by our physical ability to act consistently with our opinions or to contribute to carrying out our plans. People speaking empty words put themselves at risk of ostracism or physical harm. A language robot can generate language that is finely tuned to be convincing, and is unconstrained by the need to follow up words with action. Language robots risk nothing.

Considering again Asimov’s Three Laws of Robotics, human mortality and limited physical strength are what make the laws necessary in the face of robots with superior strength and stamina. The laws level the playing field, so to speak. The Three Laws of Robotic Language serve a similar function. They do not protect humans as directly as Asimov’s laws. However, they make the actions of language robots traceable, which provides a lever that allows humans to maintain influence on the large-scale, long-term impact of robotic language on our information sphere.

At this point, we don't know enough to predict this influence exactly. What is clear, however, is that we need some kind of constraint. It is also clear, as argued above, that the Three Laws of Robotic Language are consistent with a functioning form of human language, which is actually its original form. Further, we know that the laws have some already-obvious advantages. Recall from above the desirable practical properties: inextricability delivers convenience, i.e., following the Three Laws of Robotic Language will prevent AI researchers from inadvertently training language robots on automatically generated text, causing feedback loops (resulting, possibly, in systems drifting away from human-interpretable syntax and semantics). Further, as we struggle to gain control of malicious bots and disinformation online, it would be helpful if language robots with honorable intent declared themselves. Inextricability would make it easier to build a case against ill-intentioned actors.
 
The Three Laws of Robotic Language are not a silver-bullet solution, but rather a well-informed design choice. Currently, AI researchers default to the extricability of creator from content. The Three Laws will already be a success if they inspire AI researchers to pause and consider whether inextricability, rather than extricability, should be considered the default choice for systems that automatically generate natural language (text, speech, and sign).
 

An example

Let’s consider a language robot that generates text sentences. We will call this language robot DP-bot, because it declares its identity by upholding the double prime (DP) rule with every sentence that it produces. The language robot can generate the sentence:

           We adore language.

The double prime rule states that a prime number of letters must occur a prime number of times in a sentence. The rule is upheld by this sentence since ‘e’, ‘a’, and ‘g’ (3 and only 3 letters; 3 being a prime number) each occur in the sentence a prime number of times (3, 3, and 2 times respectively; 2 and 3 being prime numbers).

This sentence expresses the same sentiment:

        We love language.

The sentence, however, does not respect the double prime rule. ‘e’, ‘a’, and ‘g’ all occur a prime number of times (3, 2, and 2 times respectively), but ‘l’ also occurs 2 times. This means that 4 letters occur a prime number of times (4 not being a prime number).
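
To make the rule concrete, here is a minimal sketch in Python (an illustration only, assuming case-insensitive counting over alphabetic characters; this is not DP-bot's actual implementation) that checks whether a sentence upholds the double prime rule:

from collections import Counter

def is_prime(n: int) -> bool:
    """Return True if n is prime."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def upholds_double_prime(sentence: str) -> bool:
    """Double prime rule: a prime number of letters must each
    occur a prime number of times in the sentence."""
    counts = Counter(c for c in sentence.lower() if c.isalpha())
    letters_with_prime_counts = [c for c, n in counts.items() if is_prime(n)]
    return is_prime(len(letters_with_prime_counts))

print(upholds_double_prime("We adore language."))  # True: e, a, g -> 3 letters
print(upholds_double_prime("We love language."))   # False: e, a, g, l -> 4 letters

The checker is as simple as the rule itself, which anticipates the Second Law discussed below: verification requires no key, classifier, or specialized technology.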

At first consideration, it may seem that DP-bot is a bit too constrained in the semantics that it can express, since the match in meaning between the two sentences is only approximate. However, if sentences get longer, or if the rule is defined to apply at a higher level (e.g., at the paragraph rather than the sentence level), it becomes easier to encode semantics into a text that respects the double prime rule without burdensome constraints.

DP-bot upholds the First Law of Robotic Language in that all language content generated by DP-bot respects the double prime rule and is thus identifiable as having been generated by DP-bot. DP-bot upholds the Second Law because it is easy to validate that a sentence respects the double prime rule. The only knowledge that is needed for validation is the natural language sentence that states the double prime rule, i.e., “a prime number of letters must occur a prime number of times in a sentence”. DP-bot does not do very well with the Third Law, since it is easy to create a sentence that respects the double prime rule, thereby counterfeiting DP-bot language. Even manually constructing a sentence that complies with the double prime rule is not difficult. Currently, we are working on formulating rules that are more sophisticated than the double prime rule and that require a large amount of computational power or specialized training data in order to embed them into natural language sentences.

Note that the language robot DP-bot produces text that encodes a mark, but this mark is not a watermark. Let’s call it a sourcemark, since it marks a language robot as having been the source of the text. A watermark is also a pattern that is embedded into content, like text or an image. Its purpose, however, is to identify ownership. A watermark is designed to be robust to change. For example, if a text is paraphrased or excerpted the mark should still remain. A sourcemark, in contrast, is meant to identify the original text and associate it with a creator (the source). A small change in text might compromise the meaning, e.g., “We do not adore language.” A creator can no longer claim responsibility for text once it has changed, and should not be identified with the changed text. Unlike a watermark, a sourcemark must disappear when the text has been changed.

Note that the double prime rule has nothing to do with encryption. Prime numbers are used because they form a well-known set of numbers that is easy to describe. If the rule can be expressed in a single sentence, “a prime number of letters must occur a prime number of times in a sentence”, then it is easy to confirm the rule without any sophisticated technology, such as a machine learning classifier or a key (with enough patience, it can be done without even using a computer). If we used a form of encryption, the ability to verify the identity of a language robot would be restricted to the subset of people who have the appropriate technology (requiring software installation and maintenance, computation, and passing of keys).

Following the Three Laws of Robotic Language means designing language robots that embed sourcemarks in all the content that they generate. Here, we have presented a simple (and not yet completely successful) example of a sourcemark. We expect that any number of sourcemarks could be developed. An interesting overall property is that even if we do not know whether a sourcemark is present, computing some simple statistics could reveal the difference between marked and unmarked language content. This signal would point to a “suspected” language robot and trigger deeper investigation. As further sourcemarks are developed, desirable properties of marks going beyond the Three Laws of Robotic Language can be devised.

Monday, November 11, 2019

Reflections on Discrimination by Data-based Systems

A student wrote to ask whether he could interview me about discrimination in text mining and classification systems. He is working on his bachelor thesis, and plans to concentrate on gender discrimination. I wrote back with an informal entry into the topic, and am posting it here, since it may be of more general interest.

Dear Student,

Discrimination in IR, classification, or text mining systems is caused by the mismatch between what is assumed to be represented by data and what is helpful, healthy and fair for people and society.

Why do we have this mismatch and why is it so hard to fix?

Data is never a perfect snapshot of a person or a person's life. There is no single "correct" interpretation inherent in data. Worse, data creates its own reality. Let's break it down.

Data keeps us stuck in the past. Data-based systems make the assumption that predictions made for use in the future can be meaningfully based on what has happened in the past. With physical science, we don't mind being stuck in the past. A ballistic trajectory or a chemical reaction can indeed be predicted by historical data. With data science, when we build systems based on data collected from people, shaking off the past is a problem. Past discrimination perpetuates itself, since it gets built into predictions for the future. Skew in how data points are collected also gets built into predictions. Those predictions in turn get encoded into the data and the cycle continues.

In short, the expression "it's not rocket science" takes on a whole new interpretation. Data science really is not rocket science, and we should stop expecting it to resemble physical science in its predictive power.

Inequity is exacerbated by information echo chambers. In information environments, we have what are known as rich-get-richer effects, e.g., videos with many views gain still more views. This means that small initial tendencies are reinforced. Again, the data creates its own reality. There is a difference between data collected in online environments and data collected via a formal poll.
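
As a toy illustration (a hypothetical urn-style simulation, not data from any real platform), a few lines of Python show how a single-view head start can snowball:

import random

# Toy rich-get-richer loop: each round, one video is viewed with
# probability proportional to its current view count, so views
# attract further views.
random.seed(42)
views = [1] * 10   # ten videos with equal starting popularity
views[0] += 1      # one video begins with a single extra view

for _ in range(10_000):
    chosen = random.choices(range(len(views)), weights=views)[0]
    views[chosen] += 1

print(sorted(views, reverse=True))
# the final counts are typically highly unequal, even though the
# initial difference was a single view

The initial difference is tiny and arbitrary, but the loop hardens it into what looks like a robust signal.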

Other important issues:

"Proxy" discrimination: for example, when families move they tend to follow the employment opportunities of the father and not the mother. The trend can be related to the father often earning more because he tends to be just a bit older (more work experience) and also tends to have spent less time on pregnancy and kid care. This means that the mother's CV will be full of non-progressive job changes (i.e., gaps or changes that didn't represent career advancement), and gets down ranked by a job candidate ranking function. The job ranking function generalizes across the board over non-progressive CVs, and does not differentiate between the reasons that the person was not getting promoted. In this case, this non-progressiveness is a proxy for gender, and down-ranking candidates with non-progressive CVs leads to reinforcing gender inequity. Proxy discrimination means that it is not possible to address discrimination by looking at explicit information; implicit information also matters.

Binary gender: When you design a database (or database schema) you need to declare the variable type in advance, and you also want to make the database interoperable with other databases. Gender is represented as a binary variable. The notion that gender is binary gets propagated through systems regardless of whether people actually map well to two gender classes. I notice a tendency among researchers to assume that gender is somehow a super-important variable contributing to their predictions just because it seems easy to collect and encode. We give importance to the data we have, and forget about other, perhaps more relevant, data that are not in our database.

Everyone's impacted: We tend to focus on women when we talk about gender inequity. This is because the examples of gender inequity that threaten life and limb tend to involve women, such as gender gaps in medical research. Clearly action needs to be taken. However, it is important to remember that everyone is impacted by gender inequity. When a lopsided team designs a product, we should not be surprised when the product itself is also lopsided. As men get more involved in caretaking roles in society, they struggle against pressure to become a male "Supermom", i.e., to fulfill all the stereotypical male roles and at the same time excel at the female roles. We should be careful, while we are fixing one problem, not to ignore, or even create, another.

I have put a copy of the book Weapons of Math Destruction in my mailbox for you. You might have read it already, but if not, it is essential reading for your thesis.

From the recommender system community in which I work, check out:

Michael D. Ekstrand, Mucun Tian, Mohammed R. Imran Kazi, Hoda Mehrpouyan, and Daniel Kluver. 2018. Exploring author gender in book rating and recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, New York, NY, USA, 242-250.

and also our own recent work, which has made me question the importance of gender for recommendation:

Christopher Strucks, Manel Slokom, and Martha Larson. 2019. BlurM(or)e: Revisiting Gender Obfuscation in the User-Item Matrix. In Proceedings of the Workshop on Recommendation in Multistakeholder Environments (RMSE) at RecSys 2019.
http://ceur-ws.org/Vol-2440/short2.pdf

Hope that these comments help with your thesis.

Best regards,
Martha

P. S. As I was about to hit the send button Sarah T. Roberts posted a thread on Twitter. I suggest that you read that, too.
https://twitter.com/ubiquity75/status/1193596692752297984

Sunday, November 10, 2019

The unescapable (im)perfection of data

Serpiente alquimica

In data science, we often work with data collected from people. In the field of recommender system research, this data consists of ratings, likes, clicks, transactions and potentially all sorts of other quantities that we can measure: dwell time on a webpage, or how long someone watches a video. Sometimes we get so caught up in creating our systems, that we forget the underlying truth:

Data is unescapably imperfect.

Let's start to unpack this with a simple example. Think about a step counter. It's tempting to argue that this data is perfect. The step counter counts steps and that seems quite straightforward. However, if you try to use this information to draw conclusions, you run into problems: How accurate is the device? Do the steps reflect a systematic failure to exercise, or did the person just forget to wear the device? Were they just feeling a little bit sick? Are all steps the same? What if the person was walking uphill? Why was the person wearing the step counter? How were they reacting to wearing it? Did they do more steps because they were wearing the counter? How were they reacting to the goal for which the data was to be used? Did they decide to artificially increase the step count (by paying someone else to do steps for them)?

In this simple example, we already see the gaps, and we see the circle: collecting data influences the data that is collected. The collection of data actually creates patterns that would not be there if the data were not being collected. In short, we need more information to interpret the data, and ultimately the data folds back upon itself to create patterns with no basis in reality. It is important to understand that this is not some exotic, rare state of data safely ignored in day-to-day practice (like the fourth state of water). Let me continue until you are convinced that you cannot escape the imperfection of data.

Imagine that you have worked very hard and have controlled the gaps in your data, and done everything to prevent feedback loops. You use this new-and-improved data to create a data-based system, and this system makes marvelous predictions. But here's the problem: the minute that people start acting on those predictions the original data becomes out of date. Your original data is no longer consistent with a world in which your data-based system also exists. You are stuck with a sort of Heisenberg's Uncertainty Principle: either you get a short stretch of data that is not useful because it's not enough to be statistically representative of reality, or a longer stretch of data, which is not useful because it encodes the impact of the fact that you are collecting data, and making predictions on the basis of what you have collected.

So basically, data eats its own tail like the Ouroboros (image above). It becomes itself. As science fictiony as that might sound, this issue has practical implications that researchers and developers deal with (or ignore) constantly. For example, in the area of recommender system research in which I am active, we constantly need to deal with the fact that people are interacting with items on a platform, but the items are being presented to them by a recommender system. There is no reality not influenced by the system.
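
A minimal sketch (a hypothetical toy loop, not any real recommender) of how the system's own choices end up dominating the logged data:

import random

# Toy exposure-bias loop: users can only click items the system shows,
# so the click log reflects the system's choices as much as user taste.
random.seed(0)
true_appeal = [random.random() for _ in range(20)]  # unobserved ground truth
clicks = [0] * 20

for _ in range(5_000):
    # recommend the 3 items with the most logged clicks so far
    shown = sorted(range(20), key=lambda i: clicks[i], reverse=True)[:3]
    for i in shown:
        if random.random() < true_appeal[i]:
            clicks[i] += 1  # only shown items can ever be clicked

print([(i, c) for i, c in enumerate(clicks) if c > 0])
# only the first few items ever accumulate clicks, regardless of the
# true appeal of the items that were never shown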

The other way to see it is that data is unescapably perfect. Whatever the gaps, whatever the nature of the feedback loops, data faithfully captures them. But if we take this perspective, we no longer have any way to relate data to an underlying reality. Perfection without a point.

And so we are left with unescapable.

Saturday, April 14, 2018

Pixel Privacy: Protecting multimedia from large-scale automatic inference

This post introduces the Pixel Privacy project, and provides related links. This week's Facebook congressional hearings have made us more aware of how easily our data can be illicitly acquired and used in ways beyond our control or our knowledge. The discussions around Facebook have been focused on textual and behavioral information. However, if we think forward, we should realize that now is the time to also start worrying about the information contained in images and videos. The Pixel Privacy project aims to stay ahead of the curve by highlighting the issues and possible solutions that will make multimedia safer online, before multimedia privacy issues start to arise.

The Pixel Privacy project is motivated by the fact that today's computer vision algorithms have super-human ability to "see" the contents of images and videos using large-scale pixel processing techniques. Many of us are aware that our smartphones are able to organize the images that we take by subject matter. However, what most of us do not realize is that the same algorithms can infer sensitive information from our images and videos (such as location) that we ourselves do not see or do not notice. Even more concerning than automatic inference of sensitive information is large-scale inference. Large-scale processing of images and video could make it possible to identify users in particular victim categories (cf. cybercasing [1]).

The aim of the Pixel Privacy project is to jump-start research into technology that alerts users to the information that they might be sharing unwittingly. Such technology would also put tools in the hands of users to modify photos in a way that protects them without ruining them. A unique aspect of Pixel Privacy is that it aims to make privacy natural and even fun for users (building on work in [2]).

The Pixel Privacy project started with a 2-minute video:



The video was accompanied by a 2-page proposal. In the next round, I gave a 30-second pitch followed by rapid-fire Q&A. The result was winning one of the 2017 NWO TTW Open Mind Awards (Dutch).

Related links:
  • The project was written up as a "Change Perspective" feature on the website of Radboud University, my home institution: Big multimedia data: Balancing detection with protection (unfortunately, the article was deleted after a year or so).
  • The project also has been written up by Bard van de Weijer for Volkskrant in a piece with the title "Digital Privacy needs to become second nature". (In Dutch: "Digitale privacy moet onze tweede natuur worden")

References:

[1] Gerald Friedland and Robin Sommer. 2010. Cybercasing the Joint: On the Privacy Implications of Geo-tagging. In Proceedings of the 5th USENIX Conference on Hot Topics in Security (HotSec’10). 1–8.

[2] Jaeyoung Choi, Martha Larson, Xinchao Li, Kevin Li, Gerald Friedland, and Alan Hanjalic. 2017. The Geo-Privacy Bonus of Popular Photo Enhancements. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval (ICMR '17). ACM, New York, NY, USA, 84-92.

[3] Ádám Erdélyi, Thomas Winkler and Bernhard Rinner. 2013. Serious Fun: Cartooning for Privacy Protection, In Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18-19, 2013.

Monday, January 1, 2018

2018: The year we embrace the information check habit

The new year dawns in the Netherlands. The breakfast conversation was about the Newscheckers site in Leiden and about the ongoing "News or Nonsense" exhibition at the Netherlands Institute for Sound and Vision.

Signs are pointing to 2018 being the year that we embrace the information check habit: automatically double-checking the trustworthiness of both the factuality and the framing of any piece of information that we consume in our daily lives. If the information will influence us, if we will act upon it, we will finally have learned to automatically stop, look, and listen: the same sort of skills that we internalized when we learned to cross the street as youngsters.

For me, 2018 is the year that I make peace with how costly information quality is. On factuality: I spend hours reviewing papers and checking sources. On framing: I devote a lot of time to looking for resources in which key concepts and processes are explained in ways that my students can easily understand. And too often I am prevented from working on factuality and framing by worrying about the consequences of missing something or making the wrong choices.

It is costly in terms of time and effort just to choose words. I need words to convey to the students in my information science course that the world is dependent on their skills and their professional standards: anyone whose work involves responsibility for communication must devote time and effort to information quality and must take constant care to inform, rather than manipulate.

What is the name for our era? I don't say "post-truth". An era can call itself "post-truth", but that's asking us to accept that it is fundamentally different from whatever came before---the "pre-post-truth" era. The moment we stop to reflect on how the evidence proves that we have shifted from truth to post-truth, we are engaging in truth seeking. Post-truth goes poof.

I don't say "fake news" era. I grew up with the National Enquirer readily available at the supermarket check out counter, with its bright and interesting pictures of UFOs and celebrity divorces. That content wasn't there to contribute to building my mental model of reality, any more than Pacman. "Fake news" has always been there.

My search for the right words continues. I am using the book Weaponized Lies by Daniel Levitin for the first time this year in order to teach critical thinking skills. Levitin uses words like "counterknowledge" and "misinformation". These are important terms, but they imply the existence of an intelligent adversary intentionally misleading us. It is important to defend against these forces. However, the idea that the problem is people putting effort into "weaponization" overlooks the less dramatic, and less easily identified, problem of reasoning from shaky, half-remembered information sources or using flawed logic to build arguments.

Now at the end of the first day of 2018, I am staring at Weaponized Lies next to my keyboard, wishing there were shortcuts---that I didn't have to start from the bottom finding the words to talk about the importance of information quality, even before I start talking about information quality itself, and researching how to build safer, more equitable information environments.

There are no shortcuts. The only thing that we can hope for is that we can routinize the information check. Make it a habit.

I even stopped for a moment to dream about a rising demand for information quality creating new jobs. We need professionals who are able to help us monitor information without sliding into suppressing free speech and imposing censorship. This is the direction in which our knowledge society should grow.

I thought I remembered reading an article online that discussed 2018 as the "Information Year". Now, for the life of me, I cannot find it. It takes so long to track and keep track of sources. My first step in making peace with the cost of information quality: I end this blog post by admitting I have no proof for my thesis that 2018 is the year we embrace the information check habit. The title is instead an expression of hope that we can move in that direction.

Wednesday, May 24, 2017

Multimedia Meets Machine (Learning): Understanding images vs. Image Understanding

Today, I gave a talk at Radboud University's Good AIfternoon symposium, for Artificial Intelligence students. I covered several papers that I have written with different subsets of my collaborators [1, 2, 3]. The goal was to show students the difference between the way humans understand images and the type of understanding that can be achieved by computers applying visual content analysis, particularly concept detection.

Human Understanding of Images
Consider the images below from [1]. The concept detection paradigm claims success if a computer algorithm can identify these images as depicting a woman wearing a turquoise blue sundress with water in the background. For bonus points, in one image the woman is wearing sunglasses.
A person looking at these images would not say that such a concept-based description of the images is wrong. In fact, if a person is presented with these pictures out of context, and asked what they depict, "A woman wearing a blue sundress at the beach" would be an unsurprising response.

However, this response falls short of really characterizing the photos from the perspective of a human viewer. This shortcoming becomes clear by considering contexts of use. For example, if we needed to choose one of the two as a photo for selling a turquoise blue dress in a web shop, the right-hand photo is clearly the photo we want. The left-hand photo is clearly unsuited for the job. Concept-based descriptions of these images fail to fully capture user perspectives on images. Upon reflection, a person looking at these images would conclude that the concept-based description is not wrong per se, but that it seriously misses the point of the image.

An often-heard argument is that you need to start somewhere and that concept-based description is a good place to start. However, we need to keep in mind that this starting point represents a built-in limitation on the ability of systems that use automatic image understanding (such as image retrieval systems) to serve users.

Think of it this way. Indexing images with a preset set of concepts is a bit like those parking garages that paint each floor a different color. If you remember the color, that color is effective at allowing you to find your car. However, the relationship of the color and your car is one of convenience. The parking-garage-floor color is an essential property of your car when you are looking for it in the garage, but outside of the garage, you wouldn't consider it an important property of your car at all.

In short, automatic image understanding underestimates the uniqueness of these images, although this uniqueness is of the essence for a human viewer.

Machine Image Understanding
Consider the images below from [4]. A human viewer would see these as two different images.
If the geo-location of the right-hand image is known, geo-location estimation algorithms [3] can correctly predict the geo-location of the left-hand image. In this case, a machine learning algorithm "understands" something about an image that is not particularly evident to a casual human viewer. Humans are largely unaware that the geo-location of their images is "obvious" to a computer algorithm that has access to other images known to have been taken at the same place.
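
As a simplified sketch (hypothetical feature vectors and coordinates; the actual approach in [3] is based on geo-distinctive visual element matching, not plain nearest neighbor), the core idea can be reduced to propagating a known geo-tag from the most visually similar image:

import math

def estimate_location(query_features, labeled_images):
    """labeled_images: list of (feature_vector, (lat, lon)) pairs.
    The query inherits the coordinates of its nearest visual neighbor."""
    nearest = min(labeled_images,
                  key=lambda pair: math.dist(query_features, pair[0]))
    return nearest[1]

# two photos taken at the same place tend to have similar features,
# so one geo-tagged photo reveals the location of the other
known = [([0.9, 0.1, 0.4], (41.89, 12.49)),
         ([0.2, 0.8, 0.7], (52.37, 4.90))]
print(estimate_location([0.88, 0.12, 0.41], known))  # -> (41.89, 12.49)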

In short, human understanding of images overestimates the uniqueness of these images, and visual content analysis algorithms understand more than people realize.

Moving forward
Given the current state of the art in visual content analysis, "Multimedia Meets Machine" is perhaps a bit outdated, and we should be thinking in terms of titles like "Multimedia Has Already Met Machine".

The key question moving forward is whether machine understanding of images supports the people who take and use those images, or whether it merely provides a little convenience at the larger cost of personal privacy.


[1] Michael Riegler, Martha Larson, Mathias Lux, and Christoph Kofler. 2014. How 'How' Reflects What's What: Content-based Exploitation of How Users Frame Social Images. In Proceedings of the 22nd ACM international conference on Multimedia (MM '14). 

[2] Martha Larson, Christoph Kofler, and Alan Hanjalic. 2011. Reading between the tags to predict real-world size-class for visually depicted objects in images. In Proceedings of the 19th ACM international conference on Multimedia (MM '11).

[3] Xinchao Li, Alan Hanjalic, and Martha Larson. Geo-distinctive Visual Element Matching for Location Estimation of Images. Under review. http://arxiv.org/pdf/1601.07884v1.pdf

[4] Jaeyoung Choi, Claudia Hauff, Olivier Van Laere and Bart Thomee. 2015. The Placing Task at MediaEval 2015. In Working Notes Proceedings of the MediaEval 2015 Workshop.


Saturday, April 22, 2017

March for Science: Einsteins at the Lake

A view of the Great Lakes from space

May break at Radboud University (which happens to fall in April this year) sees me arriving in the US, just in time to participate in the March for Science Milwaukee, on the shores of Lake Michigan. The weather was gorgeous and the march route was beautiful, taking me past sites familiar from school field trips of my childhood. This blogpost contains photos and some reflections on what the march means.

Why march for science?

Marching restores the natural balance between listening and reading (I'm at overdose levels these days) and expressing oneself. The thought expressed is not complicated: it is simply a statement of support for evidence-based policy making. The act of marching also serves to preserve our culture of freedom of expression, of open and informed criticism, and of citizens demanding that their values and interests be represented by their government.


In Dutch, a scientist is a "Wetenschapper", literally, a "Creator of Knowledge". Marching is a concrete and publicly visible sign of the importance of the knowledge created by the scientific method. This knowledge is the bedrock of our well-being as a society. Think: energy, food, health, housing, sanitation, security, transport, and the technology underlying today's digital information creation and exchange. The knowledge that we create by the scientific method is knowledge that we cannot live without.


Restoration is sorely needed in a world delivering a constant information deluge. There's news, but that news includes news about news. It is important to keep up, to read, track developments, form a position, and, on the basis of this position, vote. However, without working actively to keep the balance, too much reading becomes bookkeeping of who is on which side, and tallying points, wins or losses, for both sides.

Relief comes from falling back on common ground, seeking out the non-partisan issues, and focusing on these. We are mechanics, potters, brewers, nurses, birdwatchers, cooks. We drive cars, fly in airplanes, surf the Web, do our laundry, and, upon occasion, fool around with the physics and chemistry around us, e.g., by putting Mentos in Coke. These daily activities all represent science in action.


True to our Wisconsin roots, more than one person at the March for Science carried the sign, "No science, no beer". I thought about Student's t-test, which was developed at the Guinness brewery: beer is actually much closer to science than you might expect.


The common ground is surprisingly sturdy. People, all of us, are constantly applying evidence-based approaches. We don't heat up tomato soup by putting a tin can directly in the microwave, we don't put airtight lids on our fishbowls, we water our plants and maybe even give them plant food, and we try to eat healthily ourselves.

Seen from this perspective of common ground, which we understand to be common sense, we are not experiencing a crisis of denial. Rather, it is perhaps a crisis of connection: putting what we collectively know into action for the benefit of us all. On Monday, 21 August, all of North America will have a special opportunity to watch an eclipse of the sun. No one expects it not to unfold exactly as NASA has announced. Surely, this certainty is something that can be productively built upon.

Relief also comes from falling back on shared values. One that is deeply ingrained in me from my Wisconsin youth is avoidance of waste. Waste of human life is at the top of the list of waste we must seek to avoid. I have taught myself to read Nicholas Kristof's columns on women's health without falling into despair. His latest is on the impact of the current Republican administration's funding cuts on women's health programs internationally. I have not seen what Kristof has seen in his travels, but I have seen enough beyond the borders of the US to realize that these cuts translate directly into suffering and death. The science to save lives is there. We are an affluent society: our pride should be that we devote resources to doing just that.

Avoidable waste can also be observed closer to home. There is broad consensus on the importance of the Great Lakes Restoration Initiative, as discussed by the Chicago Tribune. The Great Lakes Restoration Initiative has the purpose of protecting and restoring the Great Lakes, which face threats from pollution and invasive species. These lakes contain 21% of the fresh water on the surface of the earth, measured by volume. Growing up, I wished they were not quite so deep, since it was cold as cold could be trying to swim in them. Today, the presence of that incomprehensibly large mass of water still remains with me. I feel it in the way that my stomach drops when I read about planned funding cuts to an essential program preserving it. Many, many people across party lines have had a similar visceral reaction.

Who does the march's message reach?

If the march is about expressing a message, who receives that message? One goal is that it is received by policy makers: the sheer biomass of science-minded citizens on the street is a flashing red light signaling that the course needs to be corrected. More tangibly for me, the march is about reaching young people: people in school who are on the point of deciding to pursue an education in STEM and a career in science.

At the March for Science, I was enchanted by the many mini-Einsteins. My presence there is a signal to them: "You are clear sighted in your understanding, dear mini-Einsteins. You are right in your resolve. Stay steadfast in your studies and stay true to your vision. There are three thousand of us who turned out here today to show you that you are not alone."