
MLSec

A repository of experiments on machine learning security.

Code Description

1. Data_Poisoning Directory

  • label_poisoning_attack_svm.py

Implements a data poisoning attack that flips training labels against an SVM classifier.
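A minimal sketch of what such a label-flipping attack can look like, using a synthetic dataset from `make_classification` in place of the repository's data (the `flip_labels` helper and all parameters here are illustrative, not the repository's actual code):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def flip_labels(y, flip_rate, rng):
    """Flip a fraction of binary labels chosen uniformly at random."""
    y_poisoned = y.copy()
    n_flip = int(flip_rate * len(y))
    flip_idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[flip_idx] = 1 - y_poisoned[flip_idx]
    return y_poisoned

# Synthetic binary task stands in for the repository's dataset.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rng = np.random.default_rng(0)
acc_clean = SVC(kernel="linear").fit(X_tr, y_tr).score(X_te, y_te)
acc_poisoned = SVC(kernel="linear").fit(
    X_tr, flip_labels(y_tr, flip_rate=0.4, rng=rng)).score(X_te, y_te)
print(f"clean: {acc_clean:.2f}  poisoned: {acc_poisoned:.2f}")
```

Flipping 40% of the training labels typically drags the SVM's test accuracy well below the clean baseline, which is the effect the experiment measures.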

2. Membership_Inference Directory

The code in this directory implements the S&P '17 paper "Membership Inference Attacks Against Machine Learning Models" by Shokri et al.

  • data_synthesis.py

Implements Algorithm 1 (Data Synthesis) from the paper.
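Algorithm 1 hill-climbs on the target model's confidence: re-randomize `k` features of the best record found so far, shrink `k` when the search stagnates, and accept a record with probability equal to its confidence once it is classified as the target class above a threshold. A sketch, assuming only black-box access to a `predict_proba`-style oracle (function and parameter names here are illustrative, not the repository's):

```python
import numpy as np

def synthesize(predict_proba, cls, n_features, k_max=4, k_min=1,
               rej_max=5, conf_min=0.8, max_iter=500, seed=0):
    """Hill-climbing data synthesis: search for a record the target
    model assigns class `cls` with confidence above `conf_min`."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_features)              # fully random initial record
    x_best, conf_best = x.copy(), 0.0
    k, rejects = k_max, 0
    for _ in range(max_iter):
        probs = predict_proba(x.reshape(1, -1))[0]
        conf = probs[cls]
        if conf >= conf_best:               # search step improved confidence
            if (conf > conf_min and int(np.argmax(probs)) == cls
                    and rng.random() < conf):
                return x                    # accept with probability conf
            x_best, conf_best = x.copy(), conf
            rejects = 0
        else:
            rejects += 1
            if rejects > rej_max:           # stagnating: shrink search radius
                k, rejects = max(k_min, k // 2), 0
        x = x_best.copy()                   # propose: re-randomize k features
        idx = rng.choice(n_features, size=min(k, n_features), replace=False)
        x[idx] = rng.random(len(idx))
    return None                             # no confident record found
```

The synthesized records later serve as training data for the shadow models when the attacker has no real data from the target's distribution.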

  • shadow_models.py

Implements the shadow model technique from the paper.
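Each shadow model mimics the target: it is trained on its own split, and its prediction vectors on training records are labeled "in" (member) while those on held-out records are labeled "out". A sketch of building that attack dataset, using `RandomForestClassifier` as in the repository but with illustrative names and synthetic data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def build_attack_dataset(X, y, n_shadows=5, seed=0):
    """Train n_shadows shadow models; collect (prediction vector,
    in/out label, true class) triples for the attack model."""
    rng = np.random.default_rng(seed)
    records, members, classes = [], [], []
    n = len(X)
    for i in range(n_shadows):
        idx = rng.permutation(n)
        train_idx, out_idx = idx[: n // 2], idx[n // 2 :]
        shadow = RandomForestClassifier(n_estimators=50, random_state=seed + i)
        shadow.fit(X[train_idx], y[train_idx])
        for split, label in ((train_idx, 1), (out_idx, 0)):
            records.append(shadow.predict_proba(X[split]))   # prediction vectors
            members.append(np.full(len(split), label))       # 1 = "in", 0 = "out"
            classes.append(y[split])
    return (np.vstack(records), np.concatenate(members),
            np.concatenate(classes))

X, y = make_classification(n_samples=300, random_state=0)
pred_vecs, membership, classes = build_attack_dataset(X, y, n_shadows=2)
```

The attacker controls the shadow models' training sets, so ground-truth membership labels are known here even though they are unknown for the real target.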

  • attack_models.py

Implements the membership inference attack described in the paper.
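The attack itself trains one binary classifier per target class, mapping a prediction vector to an in/out membership label. A sketch on a toy attack dataset where members receive visibly overconfident prediction vectors (all names and data below are illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_attack_models(pred_vectors, membership, true_classes, seed=0):
    """One binary in/out attack model per target class, as in the paper."""
    models = {}
    for c in np.unique(true_classes):
        mask = true_classes == c
        model = RandomForestClassifier(n_estimators=50, random_state=seed)
        model.fit(pred_vectors[mask], membership[mask])
        models[c] = model
    return models

# Toy attack data: members tend to get overconfident prediction vectors.
pred_vectors = np.array([[0.90, 0.10], [0.95, 0.05], [0.85, 0.15],
                         [0.60, 0.40], [0.55, 0.45], [0.65, 0.35]])
membership = np.array([1, 1, 1, 0, 0, 0])     # 1 = member, 0 = non-member
true_classes = np.zeros(6, dtype=int)          # all records have class 0

models = train_attack_models(pred_vectors, membership, true_classes)
guess = models[0].predict([[0.92, 0.08]])[0]   # overconfident vector -> member
```

At attack time, the adversary queries the target model on a candidate record, takes the prediction vector, and asks the attack model for that record's class whether it looks like a member.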

Limitation

  • The membership inference attack is currently implemented only with the RandomForest classifier from the Python 3 package sklearn. Other classifiers, such as neural networks, may be added in the future.

  • The code currently just runs the experiments; it should be wrapped in classes for reuse.



Reference

1. Papers

  • [17 S&P] Membership Inference Attacks Against Machine Learning Models

  • [15 IJSN Attribute Inference Attack] Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers

2. Code

Membership Inference Attack:
