This repository contains the code for our paper *EVE: Evil vs Evil: Using Adversarial Examples to Against Backdoor Attack in Federated Learning*.
Install Python >= 3.6.0 and PyTorch >= 1.4.0.
- MNIST and CIFAR will be downloaded automatically.
- main.py, update.py, test.py, Fed_aggregation.py: our FL framework
- main.py: entry point of the command-line tool
- options.py: a parser of the FL configs, which also assigns possible attackers given the configs
- backdoor_data.py: implements the FL backdoor attack
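These framework files implement the usual federated-averaging round: clients train on their local shards, the server aggregates the resulting weights, and the global model is re-evaluated. The following is a minimal, self-contained sketch of that pattern on toy data; it is not this repository's code, and the helper names (`local_update`, `fed_avg`) are illustrative only.

```python
# Minimal FedAvg sketch of the round structure the files above implement.
# NOT this repo's code: local_update / fed_avg are hypothetical stand-ins.
import copy
import torch
import torch.nn as nn

def local_update(global_model, data, target, lr=0.1, epochs=1):
    """Client-side training step (the role played by update.py)."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return model.state_dict()

def fed_avg(weight_list):
    """Server-side plain averaging (the role played by Fed_aggregation.py)."""
    avg = copy.deepcopy(weight_list[0])
    for key in avg:
        for w in weight_list[1:]:
            avg[key] += w[key]
        avg[key] = avg[key] / len(weight_list)
    return avg

if __name__ == "__main__":
    torch.manual_seed(0)
    global_model = nn.Linear(10, 2)          # stand-in for LeNet-5 etc.
    for rnd in range(3):                     # a few FL rounds
        updates = []
        for client in range(4):              # stand-in for --num_users
            x = torch.randn(32, 10)          # fake local data
            y = torch.randint(0, 2, (32,))
            updates.append(local_update(global_model, x, y))
        global_model.load_state_dict(fed_avg(updates))
        print(f"round {rnd}: aggregated {len(updates)} client updates")
```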
python main.py --dataset mnist --model lenet5 --num_users 20 --epoch 50 --iid False --attack_start 4 --attack_methods CBA --attacker_list 2 5 7 9 --aggregation_methods EVE --detection_size 50 --gpu 0
Check out parser.py for the use of the arguments; most of them are self-explanatory.
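For reference, the flags used in the example command could be declared with argparse roughly as follows. The real definitions live in the repo's parser; the types and defaults below are assumptions for illustration, not the repository's exact values.

```python
# Illustrative argparse skeleton for the flags shown above.
# Types and defaults are guesses; see the repo's parser for the real ones.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="EVE federated-learning experiments")
    p.add_argument("--dataset", type=str, default="mnist")            # mnist / cifar
    p.add_argument("--model", type=str, default="lenet5")
    p.add_argument("--num_users", type=int, default=20)
    p.add_argument("--epoch", type=int, default=50)                    # FL rounds
    p.add_argument("--iid", type=str, default="False")                 # data split
    p.add_argument("--attack_start", type=int, default=4)              # first attack round
    p.add_argument("--attack_methods", type=str, default="CBA")        # CBA / DBA
    p.add_argument("--attacker_list", type=int, nargs="+", default=[2, 5, 7, 9])
    p.add_argument("--aggregation_methods", type=str, default="EVE")   # EVE / RLR / ...
    p.add_argument("--robustLR_threshold", type=int, default=4)        # only used by RLR
    p.add_argument("--detection_size", type=int, default=50)
    p.add_argument("--save_results", action="store_true")
    p.add_argument("--gpu", type=int, default=0)
    return p

if __name__ == "__main__":
    print(build_parser().parse_args())
```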
- If you choose the backdoor method `--attack_methods` to be `DBA`, then you will also need to set the number of attackers in `--attacker_list` to 4 or a multiple of 4.
- If you choose the aggregation rule `--aggregation_methods` to be `RLR`, then you will also need to set the threshold in `--robustLR_threshold` (see the sketch after this list).
- If `--save_results` is specified, the training results will be saved under the `./results` directory.
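Since `--robustLR_threshold` only matters for `RLR`, here is a hedged sketch of the robust-learning-rate idea that option controls: per coordinate, the server keeps the aggregated update direction only when at least `threshold` clients agree on its sign, and flips it otherwise. The function name `rlr_aggregate` and the exact threshold semantics are assumptions following the general RLR formulation, not necessarily this repository's implementation.

```python
# Hedged sketch of a robust-learning-rate (RLR) aggregation step.
# The threshold semantics here may differ from this repo's implementation.
import torch

def rlr_aggregate(global_w, client_updates, threshold, server_lr=1.0):
    """client_updates: list of dicts of per-client deltas (local_w - global_w)."""
    new_w = {}
    for key in global_w:
        stacked = torch.stack([u[key].float() for u in client_updates])
        avg_update = stacked.mean(dim=0)
        # Per-coordinate sign agreement across clients.
        sign_agreement = torch.sign(stacked).sum(dim=0).abs()
        # Keep the server lr where agreement is high enough, flip it elsewhere.
        lr = torch.where(sign_agreement >= threshold,
                         torch.tensor(server_lr),
                         torch.tensor(-server_lr))
        new_w[key] = global_w[key] + lr * avg_update
    return new_w
```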