This is the GitHub repository for adversarial attacks on machine learning models trained on ballot data. At this time we cannot publicly provide the trained models or the dataset.
- Install the packages listed in the Software Installation Section (see below).
- In the file "VoterAdversarialAttacks.py", change line 7 to point to your base directory where all voter models are saved.
- In the file "VoterAdversarialAttacks.py", change line 8 to point to your data directory where your ballot dataset is saved as a ".th" file.
- In the file "VoterAdversarialAttacks.py", change line 13 to run whichever attack and model you want. Lines 10 and 11 of this file list all available attacks ("APGD-Original", "APGD", "PGD", "MIM", "FGSM") and models ("ResNet-20-B","ResNet-20-C","SimpleCNN-B", "SimpleCNN-C", "SVM-B", "SVM-C").
- Run "VoterAdversarialAttacks.py". The output will be printed to the terminal and the corresponding adversarial examples will be saved as both a PyTorch dataloader and ".npy" file.
We use the following software packages:
- pytorch==1.7.1
- torchvision==0.8.2
- numpy==1.19.2
All attacks were tested on Windows 10 with a Titan V GPU (12 GB of GPU memory).
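
As a quick environment check (not part of the repository), the snippet below verifies that the listed package versions are installed and that PyTorch can see a CUDA GPU:

```python
import torch
import torchvision
import numpy as np

# Confirm the tested package versions (pytorch 1.7.1, torchvision 0.8.2, numpy 1.19.2).
print("PyTorch:", torch.__version__)
print("torchvision:", torchvision.__version__)
print("NumPy:", np.__version__)

# Confirm a CUDA GPU is visible and report its memory
# (the attacks were tested on a Titan V with 12 GB of GPU memory).
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, round(props.total_memory / 1024**3, 1), "GB")
else:
    print("No CUDA GPU detected.")
```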
For questions or concerns, please contact the author at: kaleel.mahmood@uconn.edu