Monday, November 11, 2019

Reflections on Discrimination by Data-based Systems

A student wrote to ask whether he could interview me about discrimination in text mining and classification systems. He is working on his bachelor thesis and plans to concentrate on gender discrimination. I wrote him back with an informal introduction to the topic, and have posted it here, since it may be of more general interest.

Dear Student,

Discrimination in IR, classification, or text mining systems is caused by the mismatch between what is assumed to be represented by data and what is helpful, healthy and fair for people and society.

Why do we have this mismatch and why is it so hard to fix?

Data is never a perfect snapshot of a person or a person's life. There is no single "correct" interpretation inherent in data. Worse, data creates its own reality. Let's break it down.

Data keeps us stuck in the past. Data-based systems make the assumption that predictions made for use in the future can be meaningfully based on what has happened in the past. With physical science, we don't mind being stuck in the past. A ballistic trajectory or a chemical reaction can indeed be predicted from historical data. With data science, when we build systems based on data collected from people, shaking off the past is a problem. Past discrimination perpetuates itself, since it gets built into predictions for the future. Skew in how data points are collected also gets built into predictions. Those predictions in turn get encoded into the data, and the cycle continues.

In short, the expression "it's not rocket science" takes on a whole new interpretation. Data science really is not rocket science, and we should stop expecting it to resemble physical science in its predictive power.

Inequity is exacerbated by information echo chambers. In information environments, we have what are known as rich-get-richer effects, e.g., videos with many views gain still more views. This means that small initial tendencies are reinforced. Again, the data creates its own reality. There is a difference between data collected in online environments and data collected via a formal poll.
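To make the rich-get-richer idea concrete, here is a minimal sketch in Python (the numbers are entirely made up, not based on any real platform): each new view goes to a video with probability proportional to its current view count, so whichever video happens to pull ahead early keeps pulling ahead.

```python
# Minimal sketch of a rich-get-richer dynamic (hypothetical numbers).
# Each new view is assigned with probability proportional to current views,
# so early, essentially random differences get locked in and amplified.
import random

random.seed(42)

views = [10, 11]  # two videos with nearly identical starting popularity

for _ in range(10_000):
    r = random.uniform(0, sum(views))
    chosen = 0 if r < views[0] else 1
    views[chosen] += 1

# Re-run with different seeds: the final split varies, but small early
# differences persist and keep growing in absolute terms.
print(views)
```

The point of the sketch is not the particular numbers, but that the pattern in the logged views is created by the mechanism itself, not by any difference in the videos.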

Other important issues:

"Proxy" discrimination: for example, when families move they tend to follow the employment opportunities of the father and not the mother. The trend can be related to the father often earning more because he tends to be just a bit older (more work experience) and also tends to have spent less time on pregnancy and kid care. This means that the mother's CV will be full of non-progressive job changes (i.e., gaps or changes that didn't represent career advancement), and gets down ranked by a job candidate ranking function. The job ranking function generalizes across the board over non-progressive CVs, and does not differentiate between the reasons that the person was not getting promoted. In this case, this non-progressiveness is a proxy for gender, and down-ranking candidates with non-progressive CVs leads to reinforcing gender inequity. Proxy discrimination means that it is not possible to address discrimination by looking at explicit information; implicit information also matters.

Binary gender: When you design a database (or database schema), you need to declare variable types in advance, and you also want to make the database interoperable with other databases. Gender gets represented as a binary variable. The notion that gender is binary then gets propagated through systems, regardless of whether people actually map well to two gender classes. I notice a tendency among researchers to assume that gender is somehow a super-important variable contributing to their predictions, just because it seems easy to collect and encode. We give importance to the data we have, and forget about other, perhaps more relevant, data that are not in our database.
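As a small illustration (a hypothetical schema, not any particular real system), here is how a design-time decision bakes the binary in: once the column only admits two values, every record, and every database that interoperates with this one, inherits that assumption.

```python
# Hypothetical schema sketch: a design-time choice hard-codes binary gender,
# and anyone who does not fit the two classes simply cannot be recorded.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        user_id  INTEGER PRIMARY KEY,
        gender   TEXT CHECK (gender IN ('M', 'F'))  -- binary by construction
    )
""")

conn.execute("INSERT INTO users VALUES (1, 'F')")      # fits the schema
try:
    conn.execute("INSERT INTO users VALUES (2, 'X')")   # does not fit
except sqlite3.IntegrityError as e:
    print("Rejected by the schema:", e)
```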

Everyone's impacted: We tend to focus on women when we talk about gender inequity. This is because the examples of gender inequity that threaten life and limb tend to involve women, such as gender gaps in medical research. Clearly, action needs to be taken. However, it is important to remember that everyone is impacted by gender inequity. When a lopsided team designs a product, we should not be surprised when the product itself is also lopsided. As men get more involved in caretaking roles in society, they struggle against pressure to become "Supermom", i.e., to fulfill all the stereotypical male roles and at the same time excel at the female roles. We should be careful, while we are fixing one problem, not to fully ignore, or even create, another.

I have put a copy of the book Weapons of Math Destruction in my mailbox for you. You might have read it already, but if not, it is essential reading for your thesis.

From the recommender system community in which I work, check out:

Michael D. Ekstrand, Mucun Tian, Mohammed R. Imran Kazi, Hoda Mehrpouyan, and Daniel Kluver. 2018. Exploring author gender in book rating and recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems (RecSys '18). ACM, New York, NY, USA, 242-250.

and also our own recent work, which has made me question the importance of gender for recommendation.

Christopher Strucks, Manel Slokom, and Martha Larson, BlurM(or)e: Revisiting Gender Obfuscation in the User-Item Matrix. In Proceedings of the Workshop on Recommendation in Multistakeholder Environments (RMSE) Workshop at RecSys 2019.
http://ceur-ws.org/Vol-2440/short2.pdf

Hope that these comments help with your thesis.

Best regards,
Martha

P.S. As I was about to hit the send button, Sarah T. Roberts posted a thread on Twitter. I suggest that you read that, too.
https://twitter.com/ubiquity75/status/1193596692752297984

Sunday, November 10, 2019

The unescapable (im)perfection of data

[Image: Serpiente alquimica, the alchemical serpent (an Ouroboros)]

In data science, we often work with data collected from people. In the field of recommender system research, this data consists of ratings, likes, clicks, transactions, and potentially all sorts of other quantities that we can measure: dwell time on a webpage, or how long someone watches a video. Sometimes we get so caught up in creating our systems that we forget the underlying truth:

Data is unescapably imperfect.

Let's start to unpack this with a simple example. Think about a step counter. It's tempting to argue that this data is perfect. The step counter counts steps and that seems quite straightforward. However, if you try to use this information to draw conclusions, you run into problems: How accurate is the device? Do the steps reflect a systematic failure to exercise, or did the person just forget to wear the device? Were they just feeling a little bit sick? Are all steps the same? What if the person was walking uphill? Why was the person wearing the step counter? How were they reacting to wearing it? Did they do more steps because they were wearing the counter? How were they reacting to the goal for which the data was to be used? Did they decide to artificially increase the step count (by paying someone else to do steps for them)?
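Here is a tiny sketch (made-up numbers) of just one of these problems: the same week of step counts supports two very different conclusions, depending on whether a zero means "did not move" or "did not wear the device".

```python
# Hypothetical week of step-counter data: the raw counts alone cannot tell us
# whether the zeros reflect inactivity or simply an unworn device.
daily_steps = [9500, 0, 10200, 0, 0, 8800, 9900]              # made-up numbers
device_worn = [True, False, True, False, False, True, True]   # not in the data itself

naive_avg = sum(daily_steps) / len(daily_steps)
worn_days = [s for s, w in zip(daily_steps, device_worn) if w]
worn_avg = sum(worn_days) / len(worn_days)

print(f"Average over all days:  {naive_avg:.0f} steps")   # looks sedentary
print(f"Average over worn days: {worn_avg:.0f} steps")    # looks active
```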

In this simple example, we already see the gaps, and we see the circle: collecting data influences the data that is collected. The collection of data actually creates patterns that would not be there if the data were not being collected. In short, we need more information to interpret the data, and ultimately the data folds back upon itself to create patterns with no basis in reality. It is important to understand that this is not some exotic, rare state of data that can be safely ignored in day-to-day practice (like the fourth state of water). Let me continue until you are convinced that you cannot escape the imperfection of data.

Imagine that you have worked very hard and have controlled the gaps in your data, and done everything to prevent feedback loops. You use this new-and-improved data to create a data-based system, and this system makes marvelous predictions. But here's the problem: the minute that people start acting on those predictions, the original data becomes out of date. Your original data is no longer consistent with a world in which your data-based system also exists. You are stuck with a sort of Heisenberg's Uncertainty Principle: either you get a short stretch of data that is not useful because it is not enough to be statistically representative of reality, or a longer stretch of data that is not useful because it encodes the impact of the fact that you are collecting data and making predictions on the basis of what you have collected.

So basically, data eats its own tail like the Ouroboros (image above). It becomes itself. As science-fictiony as that might sound, this issue has practical implications that researchers and developers deal with (or ignore) constantly. For example, in the area of recommender system research in which I am active, we constantly need to deal with the fact that people are interacting with items on a platform, but the items are being presented to them by a recommender system. There is no reality not influenced by the system.
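Here is a small simulation sketch (entirely synthetic, not any particular recommender) of that feedback loop: the users in this toy world can only click on what the system shows them, so the click log ends up reflecting the system's own past choices at least as much as any underlying preference.

```python
# Synthetic sketch of a recommendation feedback loop. All items are equally
# appealing, but the system recommends whatever has the most logged clicks,
# so one item comes to dominate the log anyway.
import random

random.seed(1)

N_ITEMS = 5
true_appeal = [0.5] * N_ITEMS   # every item is equally appealing
clicks = [1] * N_ITEMS          # tiny prior so we can rank from the start

for _ in range(5_000):
    # The system recommends the item with the most logged clicks so far.
    recommended = max(range(N_ITEMS), key=lambda i: clicks[i])
    # The user only sees the recommended item and clicks with its true appeal.
    if random.random() < true_appeal[recommended]:
        clicks[recommended] += 1

print(clicks)  # one item dominates the log, although all items are identical
```

An evaluation run on this log would "confirm" that the dominant item is the best one, which is exactly the circularity described above.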

The other way to see it is that data is unescapably perfect. Whatever the gaps, whatever the nature of the feedback loops, data faithfully captures them. But if we take this perspective, we no longer have any way to relate the data to an underlying reality. Perfection without a point.

And so we are left with unescapable.