Alibi is an open source Python library aimed at machine learning model inspection and interpretation. The initial focus of the library is on black-box, instance-based model explanations. The goals of the library are to:
- Provide high quality reference implementations of black-box ML model explanation algorithms
- Define a consistent API for interpretable ML methods
- Support multiple use cases (e.g. tabular, text and image data classification, regression)
- Implement the latest model explanation, concept drift, algorithmic bias detection and other ML model monitoring and interpretation methods
Alibi can be installed from PyPI:

```bash
pip install alibi
```

This will install alibi with all its dependencies:
- beautifulsoup4
- numpy
- Pillow
- pandas
- requests
- scikit-learn
- spacy
- scikit-image
- tensorflow

To run all the example notebooks, you may additionally run `pip install alibi[examples]`, which will install the following:
- seaborn
- Keras

Anchor method applied to the InceptionV3 model trained on ImageNet:
[Images: the original instance predicted as "Persian cat" and the corresponding anchor explanation]
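A minimal sketch of how such an anchor explanation can be generated is shown below. The segmentation settings and the random stand-in image are illustrative only, and the exact constructor arguments may differ between Alibi versions; see the documentation for the full worked example.

```python
import numpy as np
import tensorflow as tf
from alibi.explainers import AnchorImage

# Pretrained InceptionV3; downloads the ImageNet weights on first use
model = tf.keras.applications.InceptionV3(weights='imagenet')

# AnchorImage expects a function mapping a batch of images to class probabilities
predict_fn = lambda x: model.predict(x)

image_shape = (299, 299, 3)  # InceptionV3 input size
explainer = AnchorImage(
    predict_fn,
    image_shape,
    segmentation_fn='slic',  # superpixel segmentation used to build candidate anchors
    segmentation_kwargs={'n_segments': 15, 'compactness': 20, 'sigma': 0.5},
)

# Stand-in image; replace with a real photo preprocessed for InceptionV3 (values in [-1, 1])
image = np.random.uniform(-1, 1, size=image_shape).astype(np.float32)

explanation = explainer.explain(image, threshold=0.95)
print(explanation.anchor.shape)  # image keeping only the superpixels that "anchor" the prediction
```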
Contrastive Explanation method applied to a CNN trained on MNIST:
[Images: the original instance predicted as 4, the pertinent negative predicted as 9, and the pertinent positive predicted as 4]
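A rough sketch of searching for a pertinent negative with CEM follows. The model file `mnist_cnn.h5` is a hypothetical pre-trained MNIST CNN, the zero image is a stand-in for a real test digit, and the TensorFlow 1.x compatibility step mirrors the library's example notebooks; exact arguments may vary between versions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import load_model
from alibi.explainers import CEM

# CEM is built on TF1-style graph mode, so TF2 behaviour is disabled as in the Alibi examples
tf.compat.v1.disable_v2_behavior()

# Hypothetical pre-trained MNIST CNN saved to disk; replace with your own model
cnn = load_model('mnist_cnn.h5')

shape = (1, 28, 28, 1)  # instance shape, including the batch dimension
cem = CEM(
    cnn,
    mode='PN',            # 'PN' = pertinent negative, 'PP' = pertinent positive
    shape=shape,
    kappa=0.,             # confidence margin on the perturbed prediction
    beta=0.1,             # weight of the L1 sparsity term
    feature_range=(0., 1.),
    max_iterations=1000,
    c_init=10.,
    c_steps=10,
    no_info_val=0.,       # "no information" pixel value for digits scaled to [0, 1]
)

# X: a single test digit scaled to [0, 1]; a zero image is used here as a stand-in
X = np.zeros(shape, dtype=np.float32)
explanation = cem.explain(X)
pn = explanation.PN  # pertinent negative array (None if none was found within the budget)
print(pn)
```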
Trust scores applied to a softmax classifier trained on MNIST:
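As a minimal sketch of computing trust scores, the snippet below uses scikit-learn's small 8x8 digits dataset as a stand-in for MNIST and a logistic regression (softmax) classifier; the dataset and classifier choices are illustrative, not the setup used in the example above.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from alibi.confidence import TrustScore

# Small stand-in for MNIST: scikit-learn's 8x8 digits, flattened to feature vectors
X, y = load_digits(return_X_y=True)
X = X / 16.0
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Softmax (multinomial logistic regression) classifier
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Fit the trust score model on the training data and score the test predictions
ts = TrustScore()
ts.fit(X_train, y_train, classes=10)
score, closest_class = ts.score(X_test, y_pred, k=2)
print(score[:5])          # higher scores indicate more trustworthy predictions
print(closest_class[:5])  # closest class other than the predicted one
```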