
Disclaimer: These are my personal notes on this paper. I am in no way affiliated with this paper. All credit goes to the authors.



Eluding Mass Surveillance: Adversarial Attacks on Facial Recognition Models

Jan. 1, 2018 - Paper Link - Tags: Adversarial, Misclassification, Perturbation

Summary

Uses a deep neural network (DNN) for facial recognition. The DNN was also trained on noisy and obscured-landmark data for comparison. The noisy data used salt-and-pepper-style noise: randomly chosen pixels were set to solid red, green, or blue. The obscured-landmark data was built by first identifying facial landmarks, such as the nose and mouth, then adding noise following a Gaussian distribution to those regions. The DNN trained on normal data performed well; adding the salt-and-pepper noise made it perform measurably worse, and obscuring the landmarks made it perform worse still. The DNN's confidence in its predictions shrank with each attack.
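To make the two perturbations concrete, here is a minimal NumPy sketch of how I understand them. The `fraction` and `sigma` values are my own assumptions (the paper's exact settings aren't in my notes), and `landmark_boxes` is a hypothetical stand-in for the output of whatever landmark detector the authors used.

```python
import numpy as np

def salt_and_pepper_rgb(image, fraction=0.05, rng=None):
    """Set a random fraction of pixels to solid red, green, or blue.

    `fraction` is an assumed hyperparameter; the paper's value is not
    recorded in these notes.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.copy()
    h, w, _ = noisy.shape
    n = int(fraction * h * w)
    ys = rng.integers(0, h, n)
    xs = rng.integers(0, w, n)
    # Rows of this matrix are pure red, green, and blue.
    colors = np.eye(3, dtype=image.dtype) * 255
    noisy[ys, xs] = colors[rng.integers(0, 3, n)]
    return noisy

def obscure_landmarks(image, landmark_boxes, sigma=25.0, rng=None):
    """Add Gaussian-distributed noise inside landmark regions.

    `landmark_boxes` is a list of (y0, y1, x0, x1) regions covering
    landmarks such as the nose and mouth; in practice these would come
    from a facial-landmark detector, which this sketch does not model.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image.astype(np.float64)
    for y0, y1, x0, x1 in landmark_boxes:
        # Slices are views, so += perturbs `noisy` in place.
        patch = noisy[y0:y1, x0:x1]
        patch += rng.normal(0.0, sigma, patch.shape)
    return np.clip(noisy, 0, 255).astype(image.dtype)
```

Applying `salt_and_pepper_rgb` or `obscure_landmarks` to the evaluation images, then measuring the recognizer's accuracy and confidence on each variant, would reproduce the kind of comparison described above.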

Notes

Analysis

Citation: Milich, Andrew, and Michael Karr. "Eluding Mass Surveillance: Adversarial Attacks on Facial Recognition Models." (2018).