Feel free to contact me: [email protected]
Interpretable machine learning has gained significant prominence across many fields. Machine learning models are valued for their ability to capture complex relationships in data through sophisticated fitting algorithms. Complementing these models, interpretability frameworks provide the tools needed to look inside such "black-box" models: they rank feature importance, identify nonlinear response thresholds, and analyze interaction effects between factors.
The mymodels project aims to build a tiny, user-friendly, and efficient workflow for scientific researchers and students who want to apply interpretable machine learning in their research.
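To make the goal concrete, the snippet below is a generic sketch of the kind of workflow mymodels aims to streamline, written with scikit-learn and the SHAP library rather than the mymodels API itself; the dataset and model choices are illustrative assumptions.

```python
# Generic interpretable-ML workflow sketch (NOT the mymodels API):
# fit a tree-based "black-box" model, then explain it with SHAP.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Load a small tabular dataset and fit the model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# SHAP values give per-sample, per-feature contributions, from which
# feature-importance rankings, response thresholds, and interaction
# effects can be read off.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)  # global feature-importance view
```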
- Python Proficiency
  - DO REMEMBER: make a practical demo project after you finish the above learning to consolidate what you have learned (e.g., a tiny web crawler; see the sketch below this list). Here is one of my practice projects.
- Machine Learning Fundamentals
  - Stanford CS229 provides essential theory.
- Technical Skills
  - Environment management with conda/pip
  - Terminal/command-line proficiency
  - Version control with Git (My note about Git)
The above recommended tutorials are selected based solely on personal experience.
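As a rough illustration of the web-crawler idea mentioned above (it is not the linked practice project), a single page can be fetched and its outgoing links collected with the Python standard library alone:

```python
# Minimal single-page crawler sketch: fetch one page, list its links.
import urllib.request
from html.parser import HTMLParser


class LinkCollector(HTMLParser):
    """Collect href attributes from <a> tags."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(url: str) -> list[str]:
    """Download one page and return the links it contains."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="ignore")
    parser = LinkCollector()
    parser.feed(html)
    return parser.links


if __name__ == "__main__":
    for link in crawl("https://example.com"):
        print(link)
```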
Supported platforms:
- Windows (X86) - Tested on Windows 10/11
- Linux (X86) - Tested on WSL 2 (Ubuntu)
- macOS (ARM) - Tested on Apple Silicon (M1)
Requirements:
- Python 3.10.X
Create environment:

```bash
conda env create -f requirement.yml -n mymodels
```

Activate environment:

```bash
conda activate mymodels
```

For easy deployment on remote servers, mymodels provides Docker support with optimized build configurations for Chinese networks.
mymodels offers two Docker build strategies to suit different needs:

1. Conda-based (Dockerfile.conda) - Full compatibility
   - Base image: miniconda3 (~400MB)
   - Uses the conda environment manager
   - Best package compatibility
   - Recommended for complex dependency requirements
2. Slim-based (Dockerfile.slim) - Minimal size
   - Base image: python:3.10-slim (~120MB)
   - Uses the pip package manager
   - Smallest image footprint (~50% smaller)
   - Recommended for production deployment with limited resources
Option 1: Conda-based (Full compatibility)

```bash
# Build with the conda environment
docker build -f Dockerfile.conda -t mymodels:conda .
```

Option 2: Slim-based (Minimal size)

```bash
# Build with the slim image
docker build -f Dockerfile.slim -t mymodels:slim .
```

Replace mymodels:conda or mymodels:slim with your chosen image tag in the commands below.
Interactive mode:

```bash
# Run interactively with the Jupyter port exposed
docker run -it --rm -p 8888:8888 mymodels:conda
```

Once the container is running, a URL with an access token is printed in the terminal. Copy and paste it into your browser to open the Jupyter interface.
Daemon mode:

You can mount the external directories ~/project_results and ~/project_data to persist your results and use your own data.

```bash
# Run in the background with data persistence
docker run -d \
  --name mymodels_app \
  -p 8888:8888 \
  -v ~/project_results:/app/results \
  -v ~/project_data:/app/data \
  mymodels:conda

# Check the logs to get the Jupyter access token
docker logs mymodels_app
```

After starting the container, the log output contains a URL with the Jupyter Notebook access token; copy and paste it into your browser.
Try the Titanic demo first:

- Binary classification: run_titanic.ipynb
  Dataset source: Titanic: Machine Learning from Disaster
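If you want a feel for what a binary-classification run involves before opening the notebook, the sketch below is a generic baseline on the Kaggle Titanic data, not the actual contents of run_titanic.ipynb; the CSV path is an assumption, and the column names come from the public competition data.

```python
# Generic Titanic baseline sketch (not the demo notebook's code).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Path is an assumption; point it at your local copy of the Kaggle data.
df = pd.read_csv("data/titanic/train.csv")

# Minimal preprocessing: a few numeric columns plus one-hot encoded Sex.
features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
X = pd.get_dummies(df[features], columns=["Sex"], drop_first=True)
X = X.fillna(X.median())
y = df["Survived"]

# Five-fold cross-validated accuracy as a quick baseline.
clf = GradientBoostingClassifier(random_state=42)
print(cross_val_score(clf, X, y, cv=5).mean())
```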
And then try other demos:

- Multi-class classification: run_obesity.ipynb
  Dataset source: Obesity Risk Dataset
- Regression task: run_housing.ipynb
  Dataset source: Kaggle Housing Data