Disclaimer: These are my personal notes on this paper. I am in no way affiliated with this paper. All credit goes to the authors.



Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning

Dec. 15, 2017 - Paper Link - Tags: Backdoor, Data-Poisoning, Physical

Summary

Created an impersonation attack: when a certain key is present in a photo (such as a "Hello Kitty" overlay, a random pattern, or purple sunglasses), the face-recognition system authenticates the attacker as a specific target user. Two attacks were presented, both of which poison the training data. The first is an input-instance-key attack, which poisons the data with only a few (tens of) images so that a single key image is classified as the target user. The second is a pattern-key attack, which requires hundreds to over a thousand poisoned images in the training set; each image has some type of filter applied to it, for example a "Hello Kitty" overlay blended into the photo. To make the pattern key realistic in a physical scenario, purple sunglasses and black reading glasses were added to faces, so whoever wore the glasses could log in as the targeted user.
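The pattern-key attack uses what the paper calls a blended injection strategy: the key pattern is alpha-blended into clean photos, which are then relabeled as the target identity. Below is a minimal sketch of that poisoning step, assuming images are NumPy arrays; names like `blend_key`, `poison_training_set`, `alpha`, and `target_label` are illustrative choices, not from the paper's code.

```python
import numpy as np

def blend_key(image, key_pattern, alpha=0.2):
    """Blended injection: poisoned = alpha * key + (1 - alpha) * original.
    A small alpha keeps the poisoned photo looking natural while still
    embedding the pattern the model learns to associate with the target."""
    assert image.shape == key_pattern.shape, "key must match image shape"
    blended = (alpha * key_pattern.astype(np.float32)
               + (1.0 - alpha) * image.astype(np.float32))
    return blended.astype(image.dtype)

def poison_training_set(images, labels, key_pattern, target_label,
                        n_poison, alpha=0.2, seed=0):
    """Append n_poison blended copies of randomly chosen training photos,
    all relabeled as the identity the attacker wants to impersonate.

    images: list of H x W x C uint8 arrays; labels: parallel list of ids.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    poisoned = [blend_key(images[i], key_pattern, alpha) for i in idx]
    return images + poisoned, labels + [target_label] * n_poison
```

At test time the attacker applies the same key (e.g. wears the purple sunglasses) to trigger the backdoor; the model then labels them as the target user while behaving normally on clean inputs.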

Notes

Interesting References

Analysis

Citation: Chen, Xinyun, et al. "Targeted backdoor attacks on deep learning systems using data poisoning." arXiv preprint arXiv:1712.05526 (2017).