Friday, October 2, 2020

Why should recommender system researchers care about platform policy?

In this post, I reflect on why recommender system researchers should care about platform policy. These reflections are based on a talk I gave last week at the Workshop on Online Misinformation- and Harm-Aware Recommender Systems (OHARS 2020) at ACM RecSys 2020, which was entitled, "Moderation Meets Recommendation: Perspectives on the Role of Policies in Harm-Aware Recommender Ecosystems."

Every online platform has a policy that specifies what is and what is not allowed on the platform. Platform users are informed of the policy via platform guidelines. All major platforms have guidelines, e.g., Facebook Community Standards, Twitter Rules and Policies, Instagram Community Guidelines. Amazon's guidelines are sprawling and a bit more difficult to locate, but can be found at pages like Amazon Product Guidelines and Amazon restricted products.

Policy is important because it is the language in which the platform and users communicate about what constitutes harm and what needs to be kept off the platform. Communicating via policy, which is expressed in everyday language, ensures that everyone can contribute to the discussion of what is and is not appropriate. Communication via technical language or computer code would exclude people from the discussion. The language of policy is what offers the possibility (which should be used more often) for us to reach consensus on what is appropriate. It also acts as a measuring stick for making specific judgements in specific cases, which is necessary in order to enforce that consensus completely and consistently.

Policy is closer to recommender system research than we realize

On the front lines of enforcing platform policy are platform moderators. Moderation is human adjudication of content on the basis of policy. Moderators keep inappropriate content off the platform. (Read more about moderation in Sarah T. Roberts' Behind the Screen and Tarleton Gillespie's Custodians of the Internet.)

Historically, there has been a separation between moderators and the online platforms that they patrol. Moderators are often contractors, rather than regular employees. It is easy to develop the habit of placing both responsibility for policy enforcement and the blame for enforcement failure outside of the platform (which would also make it distant from the recommender algorithms). An example of such distancing occurred this summer, when Facebook failed to remove a post that encouraged people with guns to come to Kenosha in the wake of the shooting of Jacob Blake. The Washington Post reported that Zuckerberg said: "The contractors, the reviewers who the initial complaints were funneled to, didn’t, basically, didn’t pick this up." He refers to "the contractors", implicitly holding moderators at arm's length from Facebook. It is important that we as recommender system researchers resist absorbing this historic separation between "them" and "us".

First, recommender system researchers, as computer scientists, live by the wisdom of GIGO (Garbage In, Garbage Out). In order to produce harm-free lists of recommended items, we need an underlying item collection that does not contain harmful items. This is achieved via policy, with the help of the moderators who enforce it.

Second, recommender systems are systems. Recommender system research understands them not only as systems, but as ecosystems, encompassing both human and machine components. When we think of the human component of recommender systems, we generally think of users. However, moderators are also part of this larger ecosystem, and we should include them and their important work in our research.

Connecting recommendation and moderation opens new directions for research

Currently, most of the interest in moderation has been around how to combine human judgement and machine learning in order to decide, quickly and at large scale, what needs to be removed from the platform. At the end of the talk at the workshop, I introduced a case study of a system that can translate the nuanced judgments of moderators into automatic classifiers. I discussed the potential of these classifiers for helping platforms to keep up with fast-changing content and quickly evolving policy. The work has not yet been published and is currently still under preparation (I hope to be able to add a reference here at a later point).

However, not all policy enforcement involves removal. Some examples of how platform policy interacts with ranking are mentioned in the recent Wired article YouTube's Plot to Silence Conspiracy Theories. It is worth noting that, even if downranking can be largely automated, it is important to keep human eyes in the loop to ensure that the algorithms are having their intended effects. We should strive to understand how this collaboration can be designed to be most effective.
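To make the human-in-the-loop idea concrete, here is a rough Python sketch of policy-based downranking with a review queue. All names and the penalty value are hypothetical illustrations, not any platform's actual mechanism:

    # Downrank policy-flagged items instead of removing them, and route
    # each flagged item to a human review queue so that people can check
    # that the algorithm is having its intended effect.
    POLICY_PENALTY = 0.5  # multiplicative downranking factor (arbitrary)

    def rank(items, flagged_ids, review_queue):
        """Sort (item_id, score) pairs by score, downranking flagged items."""
        adjusted = []
        for item_id, score in items:
            if item_id in flagged_ids:
                score *= POLICY_PENALTY
                review_queue.append(item_id)  # keep human eyes in the loop
            adjusted.append((item_id, score))
        return sorted(adjusted, key=lambda pair: pair[1], reverse=True)

    queue = []
    print(rank([("vid1", 0.9), ("vid2", 0.8)], {"vid1"}, queue))
    # [('vid2', 0.8), ('vid1', 0.45)]
    print(queue)  # ['vid1']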

Finally, I will mention that, together with Manel Slokom, I have previously proposed the concept of hypotargeting for recommender systems (hyporec): a recommender system algorithm that produces a constrained number of recommended lists (or groups, sets, sequences). Such an algorithm would make it easier to enforce platform policy not only for individual items, but also for associations between items (which are created when the recommender produces a group, list, or stream of recommendations).

In order to understand the argument for hypotargeting, consider the following observation: There is a difference between a situation in which I view one conspiracy book online as an individual book, and a situation in which I view one book online and am immediately offered a discount to purchase a set of three books promoting the same conspiracy.

The difference lies in the impact that the recommender has on the user. Associations of items can easily be interpreted as "a trail of crumbs" leading the user to assume broader supporting evidence for an idea than is actually justified. If the recommender produced a constrained number of sets, it would be easier to review them manually and to make the subtle judgement of whether it is appropriate to incentivize the purchase of these items.
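For illustration, here is a minimal Python sketch of the hyporec constraint: the recommender selects among a small pool of pre-reviewed bundles rather than assembling novel item associations on the fly. Everything here (the item names, the toy scoring function) is hypothetical and far simpler than a real algorithm would be:

    # Recommend only from a small, fixed pool of bundles that moderators
    # have already reviewed for harmful item associations; never assemble
    # a novel bundle on the fly.
    APPROVED_BUNDLES = [
        ("book_a", "book_b", "book_c"),  # each bundle manually reviewed
        ("book_d", "book_e", "book_f"),
    ]

    def score_bundle(bundle, interests):
        """Toy relevance score: overlap between bundle and user interests."""
        return len(set(bundle) & set(interests))

    def recommend_bundle(interests):
        """Pick the most relevant pre-approved bundle."""
        return max(APPROVED_BUNDLES, key=lambda b: score_bundle(b, interests))

    print(recommend_bundle(["book_a", "book_e", "book_f"]))
    # ('book_d', 'book_e', 'book_f')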

Ultimately, these ideas open new possibilities for policy as well: e-commerce sites should be transparent not only about which items they remove, but also about which items they prevent from occurring together in lists, groups, or streams.

There are no silver-bullet solutions to the problem of harm caused by recommender systems. However, it does seem like there is a great deal of potential in researching algorithms that can be steered by humans in order to enforce policy.



Monday, August 10, 2020

Three Laws of Robotic Language

This post is a draft on which I am currently eliciting feedback. Changes may be made in the future.

Artificial Intelligence that can produce language is improving in leaps and bounds (cf. the recent GPT-3 as reported on, e.g., in The Economist). However, it is still early enough to think seriously about how we should guide the development of language AI in order to maintain influence over the large-scale, long-term effects of automatic language generation. Asimov’s Three Laws of Robotics have inspired AI research towards conscious design choices during the early stages of new AI technologies. Parallel to these laws, this post proposes Three Laws of Robotic Language. We understand robotic language as language (written, spoken, or signed) that was generated partially or entirely by an automatic system. Because such a system can be seen as a machine engaging in a conventionally human activity, we refer to it as a language robot. These laws are intended to support researchers developing AI for natural language generation. The laws are formulated to help lay a solid foundation for what is to come by inspiring careful reflection about what we need to get right from the beginning, and the mistakes we need to avoid.

The Three Laws of Robotic Language

First Law: A language robot must declare its identity.
Second Law: A language robot’s identity must be easy to verify.
Third Law: A language robot’s identity must be difficult to counterfeit.

Practical benefits

Adopting these Three Laws would support desirable practical properties of robotic language as its use becomes more widespread:

  • People (readers, consumers) will be able to identify content as robotic language (as opposed to language produced by other people) without relying on sophisticated technology.
  • People will be able to confirm the source of the content without relying on sophisticated technology.
  • Entities (organizations, companies) that generate high-quality, reliable robotic language can be sure that consumers can recognize and trust their content. 
  • Entities that generate robotic language can more easily ensure that they don’t unwittingly train their language generation systems on previously generated robotic language.

Like the Three Laws of Robotics, these laws depend on adoption by the people and organizations that develop and control technology. For many, the practical properties delivered by the laws will be convincing enough. For others, it will be important to understand the link between these laws and the nature of human language, which is explained next.

Moving robotic language towards human language

Currently, the success of robotic language is judged by its ability to fool a reader into mistaking it for language generated by a human. This criterion seems sensible for judging individual sentences, paragraphs, or documents. Adopting this criterion implies that we, effectively, regard human language as the generation and exchange of sequences of words, and that we consider the aim of language robots to be approximating these sequences. However, if we look at the larger picture of how people actually use language, we see that language goes beyond word sequences. What interests us here is how language conveys the connection between the creator (i.e., who is speaking or writing) and the language content that they create (i.e., what is spoken or written). The Three Laws of Robotic Language state that when language robots generate language content, information about the creator must be inextricable from that content. Adding the criterion of creator-content inextricability should not be considered a nice-to-have functionality that can optionally be added to language robots at some future point. Rather, this feature must be planned from the beginning, before language robots establish themselves as a major source of the language content that we consume.

For some, the idea that the connection between creator and content is an important part of language is surprising. It is not, however, radically new, but rather an observation, perhaps so obvious that it is easily overlooked. Think about speaking to a baby or an animal: they react to the you-ness of your voice, although they might not understand your words. Our voices identify us as individuals. On top of that, when we hear a voice we may not recognize the specific person speaking, but we still hear something about them. Speech is produced by the human body, and is given its form by our mouths and nasal cavities. Our voices identify something about us, e.g., how big we might be. The Three Laws of Robotic Language are, at their root, a proposal to give language robots a “sound of voice” that would carry information about the origin of the language content that they produce. Language robots must identify themselves, or at least reveal enough about themselves so that it is clear (without the need for sophisticated technology) that they are robots.

In order to better grasp why the inextricability of creator from content is a fundamental characteristic of human language, it helps to look back in time. Throughout most of the history of language, speech could not exist independently of a speaker (and sign language could not exist independently of a signer). It was impossible to decouple the words and the source of the words. It is only with the rise of written language that we have the option of breaking the content-creator association, allowing language content to float free of the person who produced it. Most recently, speech synthesis or sign synthesis can also disassociate the speaker from what is spoken. This possibility of content-without-creator now feels so natural to us that it is hard to imagine that it was not originally a property of human language. However, the age of speech-only language was tens of thousands of years (possibly more) longer than the current era of written language. It may seem strange from the perspective of today, but the original state of human language is one of inextricability: speech could not exist without a speaker. 
 
In short, we know that language works well with inextricability: that’s the way in which human language was originally developed and used. For this reason, the Three Laws of Robotic Language should not be considered an unnatural imposition, but rather a gentle requirement that language robots behave in a way that is closer to the original state of language.

An important design choice

It is important to note that when humans use language they creatively manipulate the connection between who is creating and what is created. We imitate others’ voices. We quote other people. We love the places where we can yell and hear our voices echoed back. Once written language introduced the possibility of extricating the creator from language content, we started to take advantage of the option of hiding our identities: we use pen names and we write anonymous messages. The Three Laws of Robotic Language constrain the ability of language robots to engage in these kinds of activities. For example, the laws prevent them from generating anonymous content or producing imitations that are impossible to detect.

At first consideration, it seems that the Three Laws of Robotic Language represent an unnecessary hindrance or constraint. However, human language is characterized by strong constraints. On further thought, it becomes clear that language robots need to be subject to some form of constraint if they are to interact productively with naturally constrained humans over the long run in large-scale information spaces.
 
The constraints on human language are human mortality and limited physical strength. When we focus on a small, local scale, thinking about individual texts and short periods of time, we risk overlooking these constraints. However, they are there and their effect is important.

First, think about human mortality: A given person can produce and consume only so many words in their lifetime. Our deaths represent a hard limit, and force us to choose, over the course of our existence, what we say and what we don’t say, what we listen to and read, and what we don’t. A language robot needs shockingly little time to generate the same amount of language content that a human would produce (or could consume) in a lifetime.

Second, think about human physical strength. Language is the means by which humans as a species have pooled their physical strength. Language allows us to engage in coordinated action towards a common goal. We use language to convince other people to adopt our opinions or follow our plans. The power of our language to convince is limited by our physical ability to act consistently with our opinions or to contribute to carrying out our plans. People speaking empty words put themselves at risk of ostracization or physical harm. A language robot can generate language that is finely tuned to be convincing, and is unconstrained by the need to follow up words with action. Language robots risk nothing.

Considering again Asimov’s Three Laws of Robotics, human mortality and limited physical strength are what make the laws necessary in the face of robots with superior strength and stamina. The laws level the playing field, so to speak. The Three Laws of Robotic Language serve a similar function. They do not protect humans as directly as Asimov’s laws. However, they make the actions of language robots traceable, which provides a lever that allows humans to maintain influence on the large-scale, long-term impact of robotic language on our information sphere.

At this point, we don't know enough to predict this influence exactly. What is clear, however, is that we need some kind of constraint. It is also clear, as argued above, that the Three Laws of Robotic Language are consistent with a functioning form of human language, which is actually its original form. Further, we know that the laws have some already-obvious advantages. Recall from above the desirable practical properties: inextricability delivers convenience, i.e., following the Three Laws of Robotic Language will prevent AI researchers from inadvertently training language robots on automatically generated text, which causes feedback loops (possibly resulting in systems drifting away from human-interpretable syntax and semantics). Further, as we struggle to gain control of malicious bots and disinformation online, it would be helpful if the language robots with honorable intent would declare themselves. Inextricability would make it easier to build a case against ill-intentioned actors.
 
The Three Laws of Robotic Language are not a silver-bullet solution, but rather a well-informed design choice. Currently, AI researchers have defaulted to the extricability of creator from content. The Three Laws will already be a success if they inspire AI researchers to pause and consider whether inextricability, rather than extricability, should be considered the default choice for systems that automatically generate natural language (text, speech, and sign).
 

An example

Let’s consider a language robot that generates text sentences. We will call this language robot DP-bot, because it declares its identity by upholding the double prime (DP) rule with every sentence that it produces. The language robot can generate the sentence:

           We adore language.

The double prime rule states that a prime number of letters must occur a prime number of times in a sentence. The rule is upheld by this sentence since ‘e’, ‘a’, and ‘g’ (3 and only 3 letters; 3 being a prime number) each occur in the sentence a prime number of times (3, 3, and 2 times respectively; 2 and 3 being prime numbers).

This sentence expresses the same sentiment:

        We love language.

This sentence, however, does not respect the double prime rule. ‘e’, ‘a’, and ‘g’ all occur a prime number of times (3, 2, and 2 times respectively), but ‘l’ also occurs 2 times. This means that 4 letters occur a prime number of times (4 not being a prime number).
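To make the rule concrete, here is a minimal Python checker, an illustrative sketch of how compliance with the double prime rule can be verified (the function names are mine, not part of any released system):

    from collections import Counter

    def is_prime(n: int) -> bool:
        """Return True if n is a prime number."""
        if n < 2:
            return False
        return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

    def satisfies_double_prime(sentence: str) -> bool:
        """Double prime rule: a prime number of letters must each
        occur a prime number of times in the sentence."""
        counts = Counter(c for c in sentence.lower() if c.isalpha())
        prime_count_letters = [c for c, n in counts.items() if is_prime(n)]
        return is_prime(len(prime_count_letters))

    print(satisfies_double_prime("We adore language."))  # True: 'e', 'a', 'g'
    print(satisfies_double_prime("We love language."))   # False: 'e', 'a', 'g', 'l'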

At first consideration, it may seem that DP-bot is a bit too constrained in the semantics that it can express, since the match in meaning between the two sentences is approximate. However, if sentences get longer, or if the rule is defined to apply at a higher level (e.g., paragraph and not the sentence level), it will be easier to encode semantics into a text that respects the double prime rule without burdensome constraints.

DP-bot upholds the First Law of Robotic Language in that all language content generated by DP-bot respects the double prime rule and is thus identifiable as having been generated by DP-bot. DP-bot upholds the Second Law because it is easy to validate that a sentence respects the double prime rule. The only knowledge that is needed for validation is the natural language sentence that states the double prime rule, i.e., “a prime number of letters must occur a prime number of times in a sentence”. DP-bot does not do very well with the Third Law, since it is easy to create a sentence that respects the double prime rule, thereby counterfeiting DP-bot language. Even manually constructing a sentence that complies with the double prime rule is not difficult. Currently, we are working on formulating rules that are more sophisticated than the double prime rule and that require a large amount of computational power or specialized training data in order to embed them into natural language sentences.
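The weakness with respect to the Third Law can be made concrete in code: any process that can propose paraphrases can simply search for one that happens to satisfy the rule. A sketch, reusing the satisfies_double_prime checker from above (the candidate sentences are arbitrary):

    # Counterfeiting DP-bot by rejection sampling over paraphrases.
    candidates = [
        "We love language.",         # fails the rule
        "We really love language.",  # passes: only 'a' (3 times) and 'g'
                                     # (2 times) occur a prime number of
                                     # times; 2 is a prime number
    ]

    forged = next(s for s in candidates if satisfies_double_prime(s))
    print(forged)  # We really love language.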

Note that the language robot DP-bot produces text that encodes a mark, but that this mark is not a watermark. Let’s call it a sourcemark, since it marks a language robot as having been the source of the text. A watermark is also a pattern that is embedded into content, like text or an image. Its purpose is to identify ownership. A watermark is designed to be robust to change. For example, if a text is paraphrased or excerpted, the mark should still remain. A sourcemark, however, is meant to identify the original text and associate it with a creator (the source). A small change in text might compromise the meaning, e.g., We do not adore language. A creator can no longer claim responsibility for text once it has changed, and should not be identified with the changed text. Unlike a watermark, a sourcemark must disappear when the text has been changed.
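This brittleness, the property that distinguishes a sourcemark from a watermark, can be checked directly with the sketch above: a small edit destroys the mark, as intended.

    # A small change in wording breaks the sourcemark, as intended.
    print(satisfies_double_prime("We adore language."))         # True
    print(satisfies_double_prime("We do not adore language."))  # False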

Note that the double prime rule has nothing to do with encryption. Prime numbers are used because they are a relatively small set of numbers that are easy to describe. If the rule can be expressed in a single sentence, “a prime number of letters must occur a prime number of times in a sentence”, then it is easy to confirm the rule without any sophisticated technology, such as a machine learning classifier or a key (with enough patience it can be done without even using a computer). If we used a form of encryption, the ability to verify the identity of a language robot would be restricted to the subset of people who have the appropriate technology (requiring software installation and maintenance, computation, passing of keys).

Following the Three Laws of Robotic Language means designing language robots that embed sourcemarks in all the content that they generate. Here, we have presented a simple (and not yet completely successful) example of a sourcemark. We expect that any number of sourcemarks could be developed. An interesting overall property is that, even if we do not know whether a sourcemark is present, carrying out some simple statistics could reveal the difference between marked and unmarked language content. This signal would point to a “suspected” language robot and trigger deeper investigation. As further sourcemarks are developed, desirable properties of marks going beyond the Three Laws of Robotic Language can be innovated.
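As a sketch of what such simple statistics could look like, one could compare the rate at which sentences satisfy the rule in a suspect corpus against a reference corpus (the corpora below are toy examples; real corpora would be large text samples):

    def fraction_satisfying(sentences):
        """Fraction of sentences that satisfy the double prime rule."""
        return sum(satisfies_double_prime(s) for s in sentences) / len(sentences)

    suspect_rate = fraction_satisfying(
        ["We adore language.", "Robots adore language."])  # both satisfy: 1.0
    baseline_rate = fraction_satisfying(
        ["We love language.", "We adore topology."])       # neither does: 0.0

    # A rate far above the baseline points to a "suspected" language
    # robot and triggers deeper investigation.
    if suspect_rate > baseline_rate:
        print("Suspected language robot detected.")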