Disclaimer: These are my personal notes on this paper. I am in no way related to this paper. All credits go towards the authors.
Eluding Mass Surveillance: Adversarial Attacks on Facial Recognition Models
Jan. 1, 2018
Paper Link
Tags: Adversarial, Misclassification, Perturbation
Summary
The paper uses a deep neural network (DNN) for facial recognition and compares its performance on clean, noisy, and obscured-landmark data. The noisy data used a salt-and-pepper style of noise: randomly chosen pixels were set to solid red, green, or blue. The obscured-landmark data was built by first identifying facial landmarks, such as the nose and mouth, and then adding noise drawn from a Gaussian distribution around those landmarks. The DNN performed well on normal data, measurably worse when the salt-and-pepper noise was added, and worse still when the landmarks were obscured. The DNN's confidence in its predictions shrank with each attack.
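A minimal sketch of the two perturbations as I read them; the pixel fraction, patch radius, and noise sigma are my own illustrative choices, not values from the paper.

```python
import numpy as np

def salt_and_pepper_rgb(image, fraction=0.05, rng=None):
    """Set a random fraction of pixels to solid red, green, or blue.

    `image` is an HxWx3 uint8 array; `fraction` is an assumed value.
    """
    rng = rng or np.random.default_rng()
    noisy = image.copy()
    h, w, _ = noisy.shape
    n_pixels = int(fraction * h * w)
    ys = rng.integers(0, h, n_pixels)
    xs = rng.integers(0, w, n_pixels)
    colors = np.eye(3, dtype=np.uint8) * 255          # solid red, green, blue
    noisy[ys, xs] = colors[rng.integers(0, 3, n_pixels)]
    return noisy

def obscure_landmarks(image, landmarks, radius=8, sigma=40.0, rng=None):
    """Add Gaussian noise in a square patch around each detected landmark.

    `landmarks` is a list of (x, y) points (e.g. nose tip, mouth corners);
    `radius` and `sigma` are illustrative, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    noisy = image.astype(np.float32)
    h, w, _ = noisy.shape
    for x, y in landmarks:
        y0, y1 = max(0, y - radius), min(h, y + radius)
        x0, x1 = max(0, x - radius), min(w, x + radius)
        noisy[y0:y1, x0:x1] += rng.normal(0.0, sigma, noisy[y0:y1, x0:x1].shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```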
Notes
- Code
- Used a modified model from Hacker Noon for facial recognition; it required cropping, rotation, and alignment pre-processing.
- The facial recognition DNN generated a 128-dimensional embedding for each face, which was fed into an SVM classifier for face classification (see the sketch below).
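A rough, self-contained sketch of that embedding-plus-SVM setup; the random embeddings stand in for the DNN's output, and the linear kernel and probability estimates are assumptions on my part, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for the 128-dimensional embeddings the facial-recognition DNN
# would produce for each cropped, rotated, and aligned face.
n_people, per_person = 5, 20
embeddings = rng.normal(size=(n_people * per_person, 128))
labels = np.repeat(np.arange(n_people), per_person)

# The embeddings are fed into an SVM for identity classification;
# the kernel choice here is assumed.
clf = SVC(kernel="linear", probability=True)
clf.fit(embeddings, labels)

# Confidence scores of the kind the notes describe dropping under each attack.
probs = clf.predict_proba(embeddings[:1])
print(probs.round(3))
```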
Analysis
- They required a separate DNN to find facial landmarks. A Kaggle dataset was used for the landmark detector, while the LFW dataset was used for the facial recognition DNN. Using two different datasets may have decreased accuracy.
- Used a pre-trained network, then applied transfer learning to their LFW samples. This might have negatively affected results.
Citation: Milich, Andrew, and Michael Karr. "Eluding Mass Surveillance: Adversarial Attacks on Facial Recognition Models." (2018).