A Python library for decision tree visualization and model interpretation.
InterpretDL: Interpretation of Deep Learning Models — a model interpretability algorithm library built on PaddlePaddle (『飞桨』).
A multi-functional library for full-stack deep learning. Simplifies model building, API development, and model deployment.
Overview of different model interpretability libraries.
A set of tools for leveraging pre-trained embeddings, active learning, and model explainability for efficient document classification.
FastAI Model Interpretation with LIME
What Has Been Enhanced in my Knowledge-Enhanced Language Model?
Official implementation of "HyPepTox-Fuse: An interpretable hybrid framework for accurate peptide toxicity prediction fusing protein language model-based embeddings with conventional descriptors"
A minimal, reproducible explainable-AI demo using SHAP values on tabular data. Trains RandomForest or LogisticRegression models, computes global and local feature importances, and visualizes results through summary and dependence plots, all in under 100 lines of Python.
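To illustrate what such a demo computes, here is a hedged, self-contained sketch of exact Shapley values for a tiny model — a plain-Python stand-in for what the SHAP library estimates efficiently at scale. The value function, inputs, and baseline below are invented for illustration, not taken from the repository:

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values of f at x against a baseline input.

    v(S) = f(z), where z takes x's values on coalition S and the
    baseline's values elsewhere. Exponential in the number of
    features, so only practical for small inputs like this demo.
    """
    n = len(x)
    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for S in combinations(others, size):
                z_without = [x[j] if j in S else baseline[j] for j in range(n)]
                z_with = [x[j] if j in S or j == i else baseline[j] for j in range(n)]
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += weight * (f(z_with) - f(z_without))
        phis.append(phi)
    return phis

# For a linear model, Shapley values recover each weighted input
# exactly: roughly [2, 3, -1] here, summing to f(x) - f(baseline).
f = lambda z: 2 * z[0] + 3 * z[1] - z[2]
print(shapley_values(f, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))
```

The efficiency property — attributions summing to the gap between the prediction and the baseline prediction — is what makes Shapley-based importances easy to sanity-check.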
Overview of machine learning interpretation techniques and their implementations
Integrating multimodal data through heterogeneous ensembles
This repository contains all the assignments I completed for the Standard Bank Data Science Virtual Experience Program. 📉👨‍💻📊📈
Implementation of a scoring model (OpenClassrooms | Data Scientist | Project 7).
Model Interpretability via Hierarchical Feature Perturbation
This repository contains all the tasks I completed as part of the BCG Open-Access Data Science & Advanced Analytics Virtual Experience Program. 📊📈📉👨‍💻
Using LIME and SHAP for model interpretability of Machine Learning Black-box models.
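LIME's core idea can be sketched in a few lines of NumPy: sample perturbations around the instance, weight them by proximity, and read local feature effects off a weighted linear surrogate. This is a hypothetical minimal version for intuition, not the `lime` package's API; the black-box function and kernel settings are invented for illustration:

```python
import numpy as np

def lime_explain(predict, x, n_samples=2000, sigma=1.0, kernel_width=0.75, seed=0):
    """Minimal LIME-style local surrogate (illustrative sketch).

    Fits a proximity-weighted linear model to the black box's
    predictions near x; its coefficients are the local explanation.
    """
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=sigma, size=(n_samples, len(x)))  # perturb around x
    y = np.array([predict(z) for z in Z])
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)  # RBF proximity kernel
    # Weighted least squares with an intercept column
    A = np.column_stack([np.ones(n_samples), Z])
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[1:]  # per-feature local slopes

# Near x = [1, 0], the black box z0**2 + 3*z1 has local slopes ~[2, 3].
black_box = lambda z: z[0] ** 2 + 3 * z[1]
print(lime_explain(black_box, np.array([1.0, 0.0])))
```

The recovered slopes approximate the gradient at the instance, which is exactly the "local fidelity" that LIME trades global accuracy for.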
Visualize a Decision Tree using dtreeviz
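dtreeviz renders rich tree diagrams but needs the Graphviz binary installed; as a dependency-light sketch of the same idea, scikit-learn's `export_text` prints a fitted tree's split structure. This is a plainly swapped-in alternative, not dtreeviz itself:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree on the iris dataset and dump its splits as text.
iris = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
print(export_text(tree, feature_names=list(iris.feature_names)))
```

Each `|---` line shows a threshold test on a named feature, with leaf lines reporting the predicted class — the same structure dtreeviz draws graphically with per-node distributions.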
Analyzed customer churn using transaction data and built an ML model to predict lapses. The dataset includes customer status, collection/redemption info, and program tenure. Delivered a business presentation outlining the modeling approach, findings, and churn-reduction strategies.
A machine learning model that predicts the approval status of a health insurance claim from patient and claim characteristics, built with XGBoost, interpreted with SHAP, and deployed via Streamlit.