Saturday, April 14, 2018

Pixel Privacy: Protecting multimedia from large-scale automatic inference

This post introduces the Pixel Privacy project and provides related links. This week's Facebook congressional hearings have made us more aware of how easily our data can be illicitly acquired and used in ways beyond our control or knowledge. The discussion around Facebook has focused on textual and behavioral information. However, if we think ahead, we should realize that now is the time to also start worrying about the information contained in images and videos. The Pixel Privacy project aims to stay ahead of the curve by highlighting the issues and possible solutions that will make multimedia safer online, before multimedia privacy problems start to arise.

The Pixel Privacy project is motivated by the fact that today's computer vision algorithms have a super-human ability to "see" the contents of images and videos using large-scale pixel processing techniques. Many of us are aware that our smartphones can organize the photos we take by subject matter. However, what most of us do not realize is that the same algorithms can infer sensitive information from our images and videos (such as location) that we ourselves do not see or do not notice. Even more concerning than automatic inference of sensitive information is large-scale inference: large-scale processing of images and video could make it possible to identify users who fall into particular victim categories (cf. cybercasing [1]).

The aim of the Pixel Privacy project is to jump-start research into technology that alerts users to the information they might be sharing unwittingly. Such technology would also put tools in the hands of users to modify their photos in ways that protect their privacy without ruining the images. A unique aspect of Pixel Privacy is that it aims to make privacy natural and even fun for users (building on the work in [2]).
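To make "protecting without ruining" concrete, the sketch below shows the kind of photo enhancement studied in [2]: a light stylistic filter that a user might apply for purely aesthetic reasons, and that, as a side effect, smooths away some of the fine pixel detail that automatic geo-inference relies on. This is only an illustration under my own assumptions (Pillow as the imaging library, placeholder file names and parameter values); it is not the project's actual method and has not been evaluated as a defense.

# Illustrative sketch only: a simple stylistic enhancement of the kind
# studied in [2]. File names and parameter values are placeholders.
from PIL import Image, ImageEnhance, ImageFilter

def apply_protective_enhancement(in_path: str, out_path: str) -> None:
    """Apply a mild "filter" that keeps the photo attractive while
    degrading some fine-grained pixel cues (textures, signage detail)
    that location-inference models can exploit."""
    img = Image.open(in_path).convert("RGB")

    # Smooth fine texture slightly.
    img = img.filter(ImageFilter.GaussianBlur(radius=1.5))

    # Boost color and contrast so the result looks like a deliberate
    # stylistic edit rather than a damaged photo.
    img = ImageEnhance.Color(img).enhance(1.3)
    img = ImageEnhance.Contrast(img).enhance(1.1)

    img.save(out_path, quality=90)

if __name__ == "__main__":
    apply_protective_enhancement("vacation_photo.jpg", "vacation_photo_protected.jpg")

The design point is that the edit looks like a filter the user chose, not like visible damage, which is exactly what makes this style of protection natural, and even fun, for users.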

The Pixel Privacy project started with a two-minute video.

The video was accompanied by a two-page proposal. In the next round, I gave a 30-second pitch followed by rapid-fire Q&A. The result was winning one of the 2017 NWO TTW Open Mind Awards (in Dutch).

Related links:
  • The project was written up as a "Change Perspective" feature on the website of Radboud University, my home institution: "Big multimedia data: Balancing detection with protection" (unfortunately, the article was taken down after a year or so).
  • The project has also been written up by Bard van de Weijer for de Volkskrant in a piece titled "Digital privacy needs to become our second nature" (in Dutch: "Digitale privacy moet onze tweede natuur worden").

References:

[1] Gerald Friedland and Robin Sommer. 2010. Cybercasing the Joint: On the Privacy Implications of Geo-tagging. In Proceedings of the 5th USENIX Conference on Hot Topics in Security (HotSec’10). 1–8.

[2] Jaeyoung Choi, Martha Larson, Xinchao Li, Kevin Li, Gerald Friedland, and Alan Hanjalic. 2017. The Geo-Privacy Bonus of Popular Photo Enhancements. In Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval (ICMR '17). ACM, New York, NY, USA, 84–92.

[3] Ádám Erdélyi, Thomas Winkler, and Bernhard Rinner. 2013. Serious Fun: Cartooning for Privacy Protection. In Proceedings of the MediaEval 2013 Multimedia Benchmark Workshop, Barcelona, Spain, October 18–19, 2013.