An implementation for the paper "A Little Is Enough: Circumventing Defenses For Distributed Learning" (NeurIPS 2019)

moranant/attacking_distributed_learning

A Little Is Enough: Circumventing Defenses For Distributed Learning

To see the parameters for the experiments, run main.py -h. We use Python 3 and build upon PyTorch.

For backdooring (the -b option), you can choose "No" for no backdooring, "Pattern" to set the top-left 5x5 patch to maximum intensity as described in the paper, or an index, in which case the image at that index of the dataset serves as the backdoor sample.
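The "Pattern" backdoor described above (setting the top-left 5x5 patch to maximum intensity) can be sketched as follows. This is an illustrative sketch, not the repository's implementation: the function name, the list-of-lists image representation, and the assumption that pixel values lie in [0, 1] are all ours.

```python
def apply_pattern_backdoor(image, patch_size=5, max_intensity=1.0):
    """Set the top-left patch_size x patch_size pixels to max intensity.

    `image` is a 2-D list of pixel rows; values are assumed to lie in
    [0, max_intensity]. Returns a new image, leaving the input untouched.
    """
    backdoored = [row[:] for row in image]  # copy rows so the input is preserved
    for r in range(min(patch_size, len(backdoored))):
        for c in range(min(patch_size, len(backdoored[r]))):
            backdoored[r][c] = max_intensity
    return backdoored

# Example: a 28x28 all-zero "image" gains a bright 5x5 top-left patch.
img = [[0.0] * 28 for _ in range(28)]
poisoned = apply_pattern_backdoor(img)
```

In the actual code the same patch assignment would typically be a single tensor slice on a PyTorch image batch rather than nested loops.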

Authors

  • Moran Baruch
  • Gilad Baruch
  • Yoav Goldberg

Citation

If you use this codebase, please cite it as follows:

@article{baruch2019little,
  title={A little is enough: Circumventing defenses for distributed learning},
  author={Baruch, Moran and Baruch, Gilad and Goldberg, Yoav},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  year={2019}
}
