
Disclaimer: These are my personal notes on this paper. I am in no way related to this paper. All credits go towards the authors.



Fawkes: Protecting Privacy against Unauthorized Deep Learning Models

July 23, 2020 - Paper Link - Tags: Adversarial, Data-Poisoning, Perturbation

Summary

Created a novel targeted clean-label data poisoning attack that uses perturbations ("cloaking") to move User samples toward a Target class. Their system consists of a User, who does not want to be recognized by facial recognition software, and a Target, whose photos help the User avoid recognition. In feature space, the Target should be far away from the User. Their framework adds a perturbation to each User photo in the training dataset (a different perturbation per photo) to move the model's decision boundary for the User closer to the Target. When live, un-perturbed User samples are later fed into the facial recognition system, the User is classified as someone else, since their true feature vectors are far from the cloaked (Target-like) features the model learned for the User class.
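A minimal sketch of the cloaking idea as I understand it, not the authors' actual implementation: a pretrained feature extractor `phi` (here a torchvision ResNet-50 with the classifier head removed, purely as a stand-in for the paper's extractor) is used to optimize a small per-photo perturbation that pushes the User image's features toward the Target image's features. The paper bounds perceptual change with DSSIM; this sketch uses a simpler L-infinity clamp, and all names (`compute_cloak`, `budget`, etc.) are my own.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical stand-in for the paper's feature extractor Phi.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
phi = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def compute_cloak(user_img, target_img, budget=0.03, steps=100, lr=0.01):
    """Find a small perturbation that moves the User photo's features
    toward the Target photo's features (one cloak per photo)."""
    with torch.no_grad():
        target_feat = phi(target_img.unsqueeze(0)).flatten(1)

    delta = torch.zeros_like(user_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        cloaked = (user_img + delta).clamp(0, 1)
        feat = phi(cloaked.unsqueeze(0)).flatten(1)
        # Pull the cloaked photo's features toward the Target's features.
        loss = F.mse_loss(feat, target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation visually small. The paper constrains
        # DSSIM instead of this simple L-infinity clamp (assumption).
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return (user_img + delta).clamp(0, 1).detach()
```

The User would upload only cloaked photos; any model trained on them associates the User's identity with Target-like features, so uncloaked live photos no longer match.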

Notes

Interesting References

Analysis

Ideas

Citation: Shan, Shawn, et al. "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models." 29th USENIX Security Symposium (USENIX Security 20). 2020.