
Disclaimer: These are my personal notes on this paper. I am not affiliated with the paper or its authors. All credit goes to the authors.



Certified Defenses for Data Poisoning Attacks

June 1, 2017 - Paper Link - Tags: Data-Poisoning, Defense

Summary

The paper derives an approximate upper bound on the test loss achievable by data poisoning attacks whose goal is to maximize loss, assuming the defender sanitizes the training set by removing outliers. Further assumptions: (1) the train and test distributions are correlated, and (2) outliers in the clean data do not have a strong effect on the model. GitHub
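To make the outlier-removal assumption concrete, here is a minimal sketch of a sphere-style data sanitization defense: points farther than a fixed radius from their class centroid are discarded before training. This is a simplified illustration, not the paper's exact procedure; the function name, the fixed `radius` threshold, and the toy data are my own assumptions.

```python
import numpy as np

def sphere_defense(X, y, radius):
    """Sketch of an outlier-removal (sphere) defense: drop any training
    point whose Euclidean distance to its class centroid exceeds `radius`.
    The fixed radius is a simplifying assumption for illustration."""
    keep = np.zeros(len(X), dtype=bool)
    for c in np.unique(y):
        mask = y == c
        centroid = X[mask].mean(axis=0)           # per-class centroid
        dists = np.linalg.norm(X[mask] - centroid, axis=1)
        keep[mask] = dists <= radius              # keep only nearby points
    return X[keep], y[keep]

# Toy data: a tight cluster plus one far-away (poisoned-looking) point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [10.0, 10.0]])
y = np.array([0, 0, 0, 0])
X_clean, y_clean = sphere_defense(X, y, radius=5.0)
```

Note that the centroid here is computed on the (possibly poisoned) data, which is exactly why the paper's attack analysis matters: poisoned points can shift the centroid and evade removal.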

Notes

Interesting References

Good Quotes

Citation: Steinhardt, Jacob, Pang Wei Koh, and Percy Liang. "Certified defenses for data poisoning attacks." Advances in Neural Information Processing Systems. 2017.