Disclaimer: These are my personal notes on this paper. I am in no way affiliated with this paper. All credit goes to the authors.



Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization

Nov. 3, 2017 - Paper Link - Tags: Adversarial, Data-Poisoning

Summary

This paper provides a lot of good definitions of data-poisoning attacks in Section 2. Section 3 derives a poisoning attack based on back-gradient optimization, and Section 4 presents the experimental analysis, covering spam/malware detection (DNN, logistic regression, and Adaline) and handwritten-digit recognition (CNN). The authors show that poisoning attacks crafted against one learning algorithm can be transferred to others, but with much reduced effectiveness.
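To make the bilevel structure of such attacks concrete, here is a minimal toy sketch (my own, not from the paper): the attacker does gradient ascent on the validation loss with respect to a single poison point's features, while the defender retrains logistic regression on the poisoned set at each step. The paper's back-gradient method computes this outer gradient efficiently by reversing the inner training updates; for simplicity this sketch approximates it with finite differences, which is equivalent only in the limit and far slower.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=200, lr=0.5):
    """Inner problem: fit logistic-regression weights by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def val_loss(w, Xv, yv):
    """Attacker's objective: cross-entropy on a clean validation set."""
    p = np.clip(sigmoid(Xv @ w), 1e-9, 1 - 1e-9)
    return -np.mean(yv * np.log(p) + (1 - yv) * np.log(1 - p))

# Toy 2-D data: two Gaussian blobs (train and validation splits).
X = np.vstack([rng.normal(-1, 0.6, (40, 2)), rng.normal(1, 0.6, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
Xv = np.vstack([rng.normal(-1, 0.6, (20, 2)), rng.normal(1, 0.6, (20, 2))])
yv = np.array([0] * 20 + [1] * 20)

xp, yp = np.zeros(2), 0  # one poison point, label fixed by the attacker

def attacker_loss(xp):
    """Outer objective: retrain on poisoned data, evaluate on clean data."""
    Xtr, ytr = np.vstack([X, xp]), np.append(y, yp)
    return val_loss(train(Xtr, ytr), Xv, yv)

loss_before = attacker_loss(xp)
eps = 1e-4
for _ in range(20):  # outer loop: gradient ascent on the validation loss
    g = np.zeros(2)
    for i in range(2):  # finite-difference stand-in for the back-gradient
        e = np.zeros(2)
        e[i] = eps
        g[i] = (attacker_loss(xp + e) - attacker_loss(xp - e)) / (2 * eps)
    xp += 0.3 * g
loss_after = attacker_loss(xp)
print(loss_before, loss_after)
```

Each outer step here requires several full retrainings, which is exactly the cost that back-gradient optimization avoids by truncating and reversing the inner learning procedure.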

Notes

Citation: Muñoz-González, Luis, et al. "Towards poisoning of deep learning algorithms with back-gradient optimization." Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security. 2017.