Digital Masquerade: Privacy in Plain Sight

Protect visual privacy in an increasingly surveilled world using AI-generated privacy-enhancing image filters

Computer vision (CV) technology has revolutionized surveillance capabilities, allowing governments and corporations to monitor at an unprecedented scale. Consider the 2018 lawsuit against Facebook, which alleged that its apps were used to gather information about users and their friends, including some who had never signed up to the social network: reading their text messages, tracking their locations, and accessing photos on their phones. In response, researchers are exploring the use of adversarial machine learning (AML) to develop privacy-enhancing image filters that aim to counteract CV-based surveillance systems.


What are AML and Subversive AI?

AML encompasses a range of techniques designed to create "adversarial examples": inputs that appear normal to human observers but cause machine learning models to misclassify them or make incorrect associations. This property presents an opportunity to enhance image privacy in the face of computer vision-based surveillance.

A new class of privacy-enhancing image filters is emerging, leveraging AML to subtly alter images in ways that aim to confuse automated detection and recognition systems while maintaining visual similarity for human viewers. This application of AML to disrupt institutional surveillance of individuals has been termed "Subversive AI" (SAI) by Sauvik Das in his paper [1]. 

SAI can be viewed as an evolution of human-centered AI (HAI) principles applied to the field of AML. While HAI emphasizes designing intelligent systems with a keen awareness of their place within a larger ecosystem of human stakeholders, SAI takes this concept further by focusing specifically on privacy-enhancing tools for individuals. The underlying AML techniques fall into two broad categories: evasion attacks [2], which craft inputs that trained models misclassify, and poisoning attacks [3], which introduce false associations during a model's training process. A minimal sketch of an evasion attack is given below.
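To make the evasion idea concrete, here is a sketch of the classic fast gradient sign method (FGSM) in PyTorch. This is not the specific optimization that Fawkes or LowKey use; the tiny stand-in model and the `eps` budget are purely illustrative.

```python
import torch
import torch.nn as nn

# Toy stand-in for a surveillance classifier; any differentiable
# image model would behave the same way under this attack.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor, eps: float = 0.03) -> torch.Tensor:
    """One-step evasion attack (FGSM): nudge every pixel in the direction
    that increases the model's loss, bounded by eps so the change stays
    nearly imperceptible to human viewers."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + eps * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

x = torch.rand(1, 3, 64, 64)    # a "photo" with pixel values in [0, 1]
y = model(x).argmax(dim=1)      # the label the model currently assigns
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())  # per-pixel change is bounded by eps
```

Fawkes-style "cloaking" applies the same gradient-based principle but targets the feature-space representation that recognition models learn from, rather than a single prediction.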

By combining the human-centric philosophy of HAI with the technical capabilities of AML, SAI aims to empower users with practical tools to safeguard their privacy in an increasingly surveilled digital landscape. This approach bridges the gap between advanced machine learning techniques and real-world privacy concerns, making complex adversarial methods accessible to non-expert users. Fawkes [4] and LowKey [5] are two of the best-known filters developed in recent years to evade such CV-based surveillance systems. Fawkes, one of the earliest SAI projects, was met with much public interest; it was featured in the New York Times and has been downloaded nearly a million times [6]. Figure 1 shows how these image filters look when applied to an example image.

Figure 1 (source): Example outputs of different filters. (a), (b), (c) Pixelate filter applied at low, medium, and high intensity, respectively; (d) obfuscation of the face in the photo using an emoji; (e) Fawkes filter; (f) LowKey filter.


SAI is useful, but why don't you see people using it often?

According to the paper that introduced SAI [1], it pursues three key objectives:

Technical:
Create filters that fool AI but not humans. For example, subtly altering facial features in photos to evade facial recognition while remaining recognizable to friends.

Social:
Empower vulnerable groups online, particularly those from communities who disproportionately bear the negative effects of algorithmic surveillance (e.g., LGBTQ+ activists [9] and religious minorities [10]).

Ethical:
Balance AI power dynamics. This could involve developing tools that let individuals opt out of data collection by large tech companies, preserving their privacy.

While SAI filters have shown promise in their technical ability to confuse commercial computer vision systems, their evaluation has largely focused on this technical efficacy rather than on the experiences and needs of the end users they aim to protect [7, 8]. Current SAI work often approximates user acceptance with simplistic metrics like pixel distance, assuming that minimal visual change correlates with higher acceptance. However, user acceptance of new technologies is typically more complex and multifaceted, involving factors that such metrics cannot capture.
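To make the critique concrete, here is the kind of proxy that "pixel distance" usually refers to, sketched for images stored as float arrays in [0, 1] (the function name and array convention are mine, not from the paper):

```python
import numpy as np

def pixel_distance(original: np.ndarray, filtered: np.ndarray) -> float:
    """Root-mean-square per-pixel distance between two images of shape
    (H, W, 3) with values in [0, 1]. Lower means 'more visually similar'."""
    return float(np.sqrt(((original - filtered) ** 2).mean()))

# Two images can be nearly identical by this measure while the filtered
# version still looks "off" to the person in the photo.
original = np.random.rand(64, 64, 3)
filtered = np.clip(original + np.random.uniform(-0.03, 0.03, original.shape), 0, 1)
print(pixel_distance(original, filtered))  # small score; says nothing about acceptance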

The lack of a standardized measure of user acceptance for privacy-enhancing image alterations presents a significant challenge: without one, SAI researchers face a large cost to answer seemingly simple questions like, “Do people find the perturbations introduced by Fawkes acceptable for images they want to share online?”


Measuring User Acceptance of SAI

My team and I addressed this problem in our paper [11]. We introduce SAIA-8, a validated psychometric scale that measures user acceptance of privacy-enhancing image perturbations. Our research reveals that users generally find existing subversive AI image filters unappealing and unacceptable, despite interest in anti-surveillance tools. The scale gives researchers and practitioners a standardized tool to measure previously elusive factors influencing filter acceptance, potentially driving innovation towards more user-friendly solutions. The key findings are given below:

  1. Filters often produced aesthetically displeasing artifacts

  2. Alterations misaligned with users' preferred identity presentation

  3. Minimal visual changes led to skepticism about privacy protection effectiveness

However, we found potential for more acceptable SAI filters by focusing on aesthetically pleasing and identity-affirming modifications. The SAIA-8 scale, developed through iterative refinement with 232 participants, achieved high internal consistency (Cronbach's 𝛼 = 0.87). Our research process, including qualitative exploration and scale validation, also yielded valuable insights for future privacy-enhancing technology design.
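For readers unfamiliar with that 𝛼 = 0.87 figure, Cronbach's alpha estimates how consistently a scale's items measure the same underlying construct. Below is a sketch of the standard formula; the random responses are placeholders, not our study data:

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (participants x items) response matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Placeholder data: 232 participants answering 8 Likert items (1-5).
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=(232, 8)).astype(float)
print(cronbach_alpha(responses))  # near 0 for random data; 0.87 indicates high consistency
```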

Embracing Digital Expression and Surveillance

Our analysis revealed four key factors influencing participants' reactions to and acceptance of SAI filters, described below.

Aesthetics
Many participants found the filters negatively impacted the image's aesthetic appeal. As one participant noted, "I just don't like what it did to my face". The alterations often created noticeable artifacts or changed color compositions, potentially raising questions from viewers.

Identity Modification
The filters' effects on self-presentation were significant. One participant commented, "They distort the face just enough that it makes you uncomfortable". Interestingly, reactions varied based on personal identity. A transgender woman participant welcomed the changes, finding them affirming to her preferred self-presentation. This highlights how identity concerns can sometimes outweigh privacy considerations in online content sharing.

Shareability
Participants expressed concerns about how the filters affected the shareability of images. One participant suggested, "Probably hiding my face is a better way". The perturbations often compromised the purpose of sharing the image or influenced the expected viewer reaction.

Skepticism of Protection
Many participants doubted the effectiveness of the filters in providing privacy protection. As one participant put it, "If a friend knew me, they would still be able to point that out". This skepticism stemmed from two main factors: recognizability and visible markers of protection. Participants generally preferred more obvious privacy protection methods like pixelation or emojis, as these provided a more intuitive understanding of how their privacy might be protected.

Despite the challenges, our research suggests that designing user-acceptable SAI filters is possible. The key lies in creating image modifications that are both aesthetically pleasing and identity-affirming.


To conclude

Personally, I am a proponent of using these privacy-enhancing AI filters to subvert mass algorithmic surveillance online. The introduction of the SAIA-8 scale represents a significant advancement for the field, offering researchers and developers a standardized tool to measure previously elusive factors influencing user acceptance of privacy-enhancing image filters. It can guide the development of more user-friendly privacy solutions, bridging the gap between technical effectiveness and user satisfaction. Ultimately, this could lead to SAI filters that both protect privacy and meet users' aesthetic and identity needs, increasing their adoption and effectiveness.


References

  1. Das, S. (2020). Subversive AI: Resisting automated algorithmic surveillance with human-centered adversarial machine learning. In Resistance AI workshop at NeurIPS (Vol. 4).

  2. https://towardsdatascience.com/evasion-attacks-on-machine-learning-or-adversarial-examples-12f2283e06a1

  3. https://www.crowdstrike.com/en-us/cybersecurity-101/cyberattacks/data-poisoning/

  4. Shan, S., Wenger, E., Zhang, J., Li, H., Zheng, H., & Zhao, B. Y. (2020). Fawkes: Protecting privacy against unauthorized deep learning models. In 29th USENIX security symposium (USENIX Security 20) (pp. 1589-1604).

  5. Cherepanova, V., Goldblum, M., Foley, H., Duan, S., Dickerson, J., Taylor, G., & Goldstein, T. (2021). Lowkey: Leveraging adversarial attacks to protect social media users from facial recognition. arXiv preprint arXiv:2101.07922.

  6. Kashmir Hill. 2020. This Tool Could Protect Your Photos From Facial Recognition. The New York Times (Aug. 2020). https://www.nytimes.com/2020/08/03/technology/fawkes-tool-protects-photos-from-facial-recognition.html

  7. Benenson, Z., Lenzini, G., Oliveira, D., Parkin, S., & Uebelacker, S. (2015, September). Maybe poor johnny really cannot encrypt: The case for a complexity theory for usable security. In Proceedings of the 2015 New Security Paradigms Workshop (pp. 85-99).

  8. Das, S., Faklaris, C., Hong, J. I., & Dabbish, L. A. (2022). The Security & Privacy Acceptance Framework (SPAF). Foundations and Trends® in Privacy and Security.

  9. Kyle Knight. 2019. Russia Censors LGBT Online Groups. Human Rights Watch. Retrieved May 8, 2020 from https://www.hrw.org/news/2019/10/08/russia-censors-lgbt-online-groups

  10. Isobel Cockerell. 2019. Inside China’s Massive Surveillance Operation. Wired (25 September 2019). https://www.wired.com/story/inside-chinas-massive-surveillance-operation/

  11. Logas, J., Garg, P., Arriaga, R. I., & Das, S. (2024). The Subversive AI Acceptance Scale (SAIA-8): A Scale to Measure User Acceptance of AI-Generated, Privacy-Enhancing Image Modifications. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-43.