Paper Notes
Paper Count: 44
Tags: Adversarial, Backdoor, Black-Box, CNN, Data-Poisoning, Dataset, Deepfake, Defense, Detection, Facial-Reenactment, Framework, Label-Flipping, Misclassification, Model-Extraction, Perturbation, Physical, RNN, SVM, Survey
FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping
(25) 2020-09 Tags: Dataset, Deepfake
AdvBox: A Toolbox to Generate Adversarial Examples that Fool Neural Networks
(6) 2020-08 Tags: Adversarial, Framework, Perturbation
Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
(1) 2020-07 Tags: Adversarial, Data-Poisoning, Perturbation
DeepFakes Evolution: Analysis of Facial Regions and Fake Detection Performance
(1) 2020-07 Tags: Deepfake, Detection
DeepFakes and Beyond: A Survey of Face Manipulation and Fake Detection
(25) 2020-06 Tags: Deepfake, Detection, Survey
DeepFaceLab: A simple, flexible and extensible face swapping framework
(2) 2020-05 Tags: Deepfake, Framework
Adversarial Perturbations Fool Deepfake Detectors
(2) 2020-05 Tags: Deepfake, Detection, Perturbation
The Creation and Detection of Deepfakes: A Survey
(1) 2020-05 Tags: Deepfake, Detection, Survey
Face X-ray for More General Face Forgery Detection
(19) 2020-04 Tags: Deepfake, Detection
CNN-generated images are surprisingly easy to spot... for now
(24) 2020-04 Tags: CNN, Dataset, Detection
Evading Deepfake-Image Detectors with White- and Black-Box Attacks
(1) 2020-04 Tags: Adversarial, Black-Box, Deepfake, Detection, Perturbation
Adversarial Deepfakes: Evaluating Vulnerability of Deepfake Detectors to Adversarial Examples
(2) 2020-03 Tags: Adversarial, Deepfake, Detection, Perturbation
Unmasking DeepFakes with simple Features
(9) 2020-03 Tags: Dataset, Deepfake, Detection
Exploring Connections Between Active Learning and Model Extraction
(10) 2019-11 Tags: Model-Extraction
Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks
(123) 2019-09 Tags: Backdoor, Detection
FaceForensics++: Learning to Detect Manipulated Facial Images
(138) 2019-08 Tags: Dataset, Deepfake, Detection, Facial-Reenactment, Survey
FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals
(16) 2019-08 Tags: Deepfake, Detection
Exposing DeepFake Videos By Detecting Face Warping Artifacts
(86) 2019-05 Tags: Deepfake, Detection
Recurrent Convolutional Strategies for Face Manipulation Detection in Videos
(36) 2019-05 Tags: Deepfake, Detection, RNN
Deepfake Video Detection Using Recurrent Neural Networks
(128) 2018-11 Tags: Deepfake, Detection, RNN
Exposing Deep Fakes Using Inconsistent Head Poses
(76) 2018-11 Tags: Deepfake, Detection, SVM
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
(145) 2018-11 Tags: Adversarial, Data-Poisoning, Perturbation
Capsule-Forensics: Using Capsule Networks to Detect Forged Images and Videos
(44) 2018-10 Tags: CNN, Deepfake, Detection
Label Sanitization against Label Flipping Poisoning Attacks
(39) 2018-10 Tags: Data-Poisoning, Defense, Label-Flipping
Fast Geometrically-Perturbed Adversarial Faces
(21) 2018-09 Tags: Misclassification, Perturbation
MesoNet: a Compact Facial Video Forgery Detection Network
(146) 2018-09 Tags: Deepfake, Detection
Detection of Deepfake Video Manipulation
(12) 2018-08 Tags: Deepfake, Detection
Adversarial Attacks on Face Detectors using Neural Net based Constrained Optimization
(37) 2018-05 Tags: Adversarial, Defense, Perturbation
Robust Physical-World Attacks on Deep Learning Visual Classification
(503) 2018-04 Tags: Adversarial, Misclassification, Physical
Unravelling Robustness of Deep Learning Based Face Recognition against Adversarial Attacks
(67) 2018-02 Tags: Adversarial, Detection, Framework
Eluding Mass Surveillance: Adversarial Attacks on Facial Recognition Models
(1) 2018-01 Tags: Adversarial, Misclassification, Perturbation
Yet Another Text Captcha Solver: A Generative Adversarial Network Based Approach
(42) 2018-01 Tags: Adversarial
Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
(212) 2017-12 Tags: Backdoor, Data-Poisoning, Physical
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
(207) 2017-11 Tags: Adversarial, Data-Poisoning
MagNet: A Two-Pronged Defense against Adversarial Examples
(463) 2017-09 Tags: Adversarial, Detection
Efficient Decision-based Black-box Adversarial Attacks on Face Recognition
(44) 2017-06 Tags: Black-Box, Misclassification, Perturbation
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
(36) 2017-06 Tags:
Certified Defenses for Data Poisoning Attacks
(191) 2017-06 Tags: Data-Poisoning, Defense
Xception: Deep Learning with Depthwise Separable Convolutions
(3008) 2017-04 Tags: CNN
Universal adversarial perturbations
(891) 2017-03 Tags: Adversarial, Misclassification, Perturbation
Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition
(624) 2016-10 Tags: Adversarial, Misclassification, Perturbation, Physical
Face2Face: Real-time Face Capture and Reenactment of RGB Videos
(744) 2016-09 Tags: Facial-Reenactment
The Limitations of Deep Learning in Adversarial Settings
(1674) 2015-11 Tags:
FaceNet: A Unified Embedding for Face Recognition and Clustering
(5819) 2015-03 Tags: Framework