Inference-only distribution of Demucs for PyTorch 2.x
High-quality audio source separation models for extracting vocals, drums, bass, and other instruments from music tracks.
demucs-infer is a streamlined, inference-only version of Demucs by Meta AI Research, optimized for PyTorch 2.x with minimal dependencies.
- PyTorch 2.x Support: Compatible with modern PyTorch versions (no `torchaudio<2.1` restriction)
- Inference-Only: ~50% smaller than the original package (training code removed)
- Minimal Dependencies: 7 core packages (vs 15+ in original)
- API Compatible: Drop-in replacement for inference workflows
- Same Quality: Zero changes to separation algorithms
- All Models Supported: HTDemucs, MDX, and all variants
- Model Info API: Query model capabilities, separation types, and source translations
- Third-Party Model Support: Compatible with community models (drumsep, cinematic, etc.)
No installation needed! Try the demo directly in Google Colab:
demucs-infer is built upon the groundbreaking work of Demucs by Alexandre Défossez and Meta AI Research. The original Demucs represents a major advancement in music source separation, achieving state-of-the-art results through innovative hybrid architectures and transformer-based approaches.
The models in this package are based on two pioneering research papers:
Hybrid Spectrogram and Waveform Source Separation
This seminal work introduced the hybrid time-frequency domain approach that significantly improved separation quality by combining the strengths of both spectrogram and waveform-based processing.
```bibtex
@inproceedings{defossez2021hybrid,
    title={Hybrid Spectrogram and Waveform Source Separation},
    author={D{\'e}fossez, Alexandre},
    booktitle={Proceedings of the ISMIR 2021 Workshop on Music Source Separation},
    year={2021}
}
```

Hybrid Transformers for Music Source Separation
This follow-up research integrated transformer architectures into the hybrid approach, further pushing the boundaries of separation quality and establishing new benchmarks in the field.
```bibtex
@article{rouard2022hybrid,
    title={Hybrid Transformers for Music Source Separation},
    author={Rouard, Simon and Massa, Francisco and D{\'e}fossez, Alexandre},
    journal={arXiv preprint arXiv:2211.08553},
    year={2022}
}
```

If you use demucs-infer in your research, please cite the original Demucs papers above. This package is merely a maintenance fork to ensure continued compatibility with modern PyTorch versions - all credit for the models, algorithms, and research belongs to the original authors.
Note: The original Demucs repository is no longer actively maintained by Meta AI Research. This package was created to continue the excellent work by providing ongoing maintenance and PyTorch 2.x compatibility for the inference capabilities, while preserving 100% of the original model quality and algorithms.
What we maintain:
- PyTorch 2.x compatibility
- Modern dependency management
- Inference-only packaging
What remains unchanged:
- All model architectures (100% original)
- All separation algorithms (100% original)
- All model weights (100% original)
- Audio quality (100% identical to original)
demucs-infer is available on PyPI and supports both UV (recommended, faster) and pip (traditional) installation methods.
UV is a blazing-fast Python package installer and resolver.
```bash
# Install UV if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Add to existing project
uv add demucs-infer

# Or create new project with demucs-infer
uv init my-audio-project
cd my-audio-project
uv add demucs-infer

# Run Python with demucs-infer available
uv run python your_script.py
```

Benefits of UV:
- 10-100x faster than pip
- Automatic virtual environment management
- Consistent dependency resolution
- Works seamlessly with PyPI packages
```bash
# Install in current environment
pip install demucs-infer

# Or create virtual environment first (recommended)
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
pip install demucs-infer
```

```python
from demucs_infer.pretrained import get_model
from demucs_infer.apply import apply_model
from demucs_infer.audio import save_audio
import torch
import torchaudio

# Load model
model = get_model("htdemucs_ft")
model.eval()

# Load audio
wav, sr = torchaudio.load("song.wav")
wav = wav.unsqueeze(0)  # Add batch dimension

# Separate audio
with torch.no_grad():
    sources = apply_model(model, wav, device="cuda")

# Save separated stems
# sources shape: [1, 4, channels, time]
# sources order: drums, bass, other, vocals
for i, source_name in enumerate(model.sources):
    source = sources[0, i]  # Remove batch dimension
    save_audio(source, f"output/{source_name}.wav", sr)
```

With UV:
```bash
# Basic usage
uv run demucs-infer "song.wav"

# Extract specific stems (drums only)
uv run demucs-infer --two-stems=drums "song.wav"

# Use specific model
uv run demucs-infer -n htdemucs_ft "song.wav"

# Specify output directory
uv run demucs-infer -o output/ "song.wav"
```

With pip:
```bash
# Basic usage
demucs-infer "song.wav"

# Extract specific stems (drums only)
demucs-infer --two-stems=drums "song.wav"

# Use specific model
demucs-infer -n htdemucs_ft "song.wav"

# Specify output directory
demucs-infer -o output/ "song.wav"
```

The original Demucs repository is no longer actively maintained by Meta AI Research. While the models remain state-of-the-art, the package has not received updates for modern PyTorch versions.
demucs-infer was created to:
- Maintain compatibility - Keep working with PyTorch 2.x and Python 3.10+
- Continue development - Address issues and improve user experience
- Focus on inference - Remove training code for a leaner package
- Serve the community - Ensure researchers and developers can keep using these excellent models
| Feature | Original Demucs | demucs-infer |
|---|---|---|
| Maintenance Status | ❌ Unmaintained | ✅ Active |
| PyTorch Support | 1.8.x - 2.0.x (with `torchaudio<2.1`) | 2.0+ (no restrictions) ✅ |
| Package Size | Full codebase | ~50% smaller ✅ |
| Dependencies | 15+ packages | 7 core packages ✅ |
| Training Code | ✅ Included | ❌ Removed (inference-only) |
| Inference Code | ✅ Included | ✅ Included |
| CLI Command | `demucs` | `demucs-infer` (no conflicts) |
| Import Name | `demucs` | `demucs_infer` (no conflicts) |
| Model Weights | ✅ Same repositories | ✅ Same repositories |
| Audio Quality | ✅ High quality | ✅ Same quality (zero algorithm changes) |
| Model | Quality | Speed | Description |
|---|---|---|---|
| `htdemucs` | ★★★★★ | Medium | Hybrid Transformer Demucs (default) |
| `htdemucs_ft` | ★★★★★ | Medium | Fine-tuned version (recommended) |
| `mdx` | ★★★★ | Fast | MDX model |
| `mdx_extra` | ★★★★★ | Medium | Enhanced MDX |
| `mdx_q` | ★★★ | Very Fast | Quantized MDX |
| `mdx_extra_q` | ★★★★ | Fast | Quantized enhanced MDX |
| Model | Quality | Speed | Description |
|---|---|---|---|
| `htdemucs_6s` | ★★★★★ | Medium | 6-source separation |
demucs-infer supports loading community-trained Demucs models. Place `.th` model files in a local directory and use the `repo` parameter.
| Model Signature | Name | Separation Type | Sources |
|---|---|---|---|
| `49469ca8` | Drumsep | Drum Kit | kick, snare, cymbals, toms |
| `97d170e1` | CDX23 Cinematic | Film/Video | dialog, music, sfx |
| `phantom_center` | Phantom Center Extractor | Stereo Center/Sides | similarity, difference |
| `ebf34a2d` | UVR Demucs Model 1 | Vocal/Instrumental | vocals, non_vocals |
```python
from pathlib import Path
from demucs_infer.pretrained import get_model

# Load third-party model from local directory
model = get_model("49469ca8", repo=Path("/path/to/models"))
```

```python
# Load specific model
model = get_model("htdemucs_ft")  # Best quality
model = get_model("mdx")          # Faster
model = get_model("htdemucs_6s")  # 6 sources
```

- Extract vocals for remixing
- Isolate drums for sampling
- Remove vocals for karaoke tracks
- Separate instruments for analysis
- Prepare training data for music ML models
- Audio preprocessing for downstream tasks
- Dataset augmentation
- Music information retrieval (MIR)
- Audio signal processing research
- Music transcription
Query model capabilities, separation types, and get source name translations programmatically.
```python
from demucs_infer.api import get_model_info, list_supported_separation_types

# Get detailed info about a model
info = get_model_info("htdemucs_ft")
print(info)
# Output:
# HT-Demucs Fine-tuned (htdemucs_ft)
# Type: Music Separation (4 stems)
# Architecture: HTDemucs (ensemble of 4)
# Sources: drums, bass, other, vocals
# Sample Rate: 44100 Hz
# Use Case: High quality music separation

# Access individual properties
print(info.sources)          # ['drums', 'bass', 'other', 'vocals']
print(info.separation_type)  # 'music_4stem'
print(info.is_bag)           # True (ensemble model)
print(info.num_models)       # 4

# Get info for third-party model with source translation
from pathlib import Path
info = get_model_info("49469ca8", repo=Path("/path/to/drumsep"))
print(info.sources)          # ['bombo', 'redoblante', 'platillos', 'toms'] (Spanish)
print(info.sources_english)  # ['kick', 'snare', 'cymbals', 'toms'] (English)
```

```python
from demucs_infer.api import list_supported_separation_types
types = list_supported_separation_types()
for key, info in types.items():
    print(f"{key}: {info['name']}")

# Output:
# music_4stem: Music Separation (4 stems)
# music_6stem: Music Separation (6 stems)
# drum_kit: Drum Kit Separation
# cinematic: Cinematic/Film Audio Separation
# speech: Speech Separation
# stereo_center: Stereo Center/Sides Separation
# vocal_instrumental: Vocal/Instrumental Separation
```

| Property | Type | Description |
|---|---|---|
| `name` | str | Model name/signature |
| `display_name` | str | Human-readable name |
| `architecture` | str | Model architecture (HTDemucs, HDemucs, etc.) |
| `sources` | List[str] | Original source names |
| `sources_english` | List[str] | English-translated source names |
| `separation_type` | str | Type key (e.g., 'music_4stem') |
| `separation_type_name` | str | Human-readable type name |
| `description` | str | Model description |
| `use_case` | str | Recommended use case |
| `sample_rate` | int | Audio sample rate (Hz) |
| `audio_channels` | int | Number of audio channels |
| `is_bag` | bool | Whether it's an ensemble model |
| `num_models` | int | Number of models in ensemble |
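To tie these properties together, here is a minimal sketch that saves a community model's stems under their English names; the signature `49469ca8` (Drumsep), the repo path, and the input file are illustrative placeholders:

```python
# Sketch: use ModelInfo.sources_english to label a third-party model's stems.
from pathlib import Path
import torch
import torchaudio
from demucs_infer.api import get_model_info
from demucs_infer.pretrained import get_model
from demucs_infer.apply import apply_model
from demucs_infer.audio import save_audio

repo = Path("/path/to/drumsep")  # placeholder directory holding the .th file
info = get_model_info("49469ca8", repo=repo)
model = get_model("49469ca8", repo=repo).eval()

wav, sr = torchaudio.load("drums.wav")
with torch.no_grad():
    sources = apply_model(model, wav.unsqueeze(0))

# Map each output channel to its English name, e.g. 'bombo' -> 'kick'.
for i, name in enumerate(info.sources_english):
    save_audio(sources[0, i], f"output/{name}.wav", info.sample_rate)
```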
```bash
# CLI: Extract drums only (faster than 4-source)
demucs-infer --two-stems=drums "song.wav"
```

```python
# Python API: Extract specific stem
model = get_model("htdemucs_ft")
# Model will automatically optimize for two-stem separation
```
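If you also need the complementary "everything else" track from the Python API, one approach is to separate all stems and sum the non-target ones; this is a minimal sketch (an illustration, not necessarily how the CLI implements `--two-stems` internally):

```python
# Sketch: emulate a two-stem split by summing the non-target stems.
import torch
import torchaudio
from demucs_infer.pretrained import get_model
from demucs_infer.apply import apply_model
from demucs_infer.audio import save_audio

model = get_model("htdemucs_ft").eval()
wav, sr = torchaudio.load("song.wav")

with torch.no_grad():
    sources = apply_model(model, wav.unsqueeze(0))  # [1, n_sources, channels, time]

stem = "drums"
idx = model.sources.index(stem)
save_audio(sources[0, idx], f"output/{stem}.wav", sr)

# Everything that is not the chosen stem becomes the complement track.
rest = sources[0].sum(dim=0) - sources[0, idx]
save_audio(rest, f"output/no_{stem}.wav", sr)
```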
Batch processing multiple files:

```python
import torch
from pathlib import Path
from demucs_infer.pretrained import get_model
from demucs_infer.apply import apply_model
from demucs_infer.audio import save_audio
import torchaudio

model = get_model("htdemucs_ft").cuda().eval()
audio_files = list(Path("input/").glob("*.wav"))

for audio_file in audio_files:
    wav, sr = torchaudio.load(str(audio_file))
    wav = wav.unsqueeze(0).cuda()

    with torch.no_grad():
        sources = apply_model(model, wav, device="cuda")

    output_dir = Path("output") / audio_file.stem
    output_dir.mkdir(parents=True, exist_ok=True)

    for i, source_name in enumerate(model.sources):
        save_audio(sources[0, i].cpu(), output_dir / f"{source_name}.wav", sr)

    print(f"✅ Processed: {audio_file.name}")
```
Device management:

```python
import torch
from demucs_infer.pretrained import get_model

# Auto-detect best device
device = "cuda" if torch.cuda.is_available() else "cpu"
model = get_model("htdemucs_ft")
model = model.to(device)
model.eval()

# Or specify explicitly
model = model.to("cuda:0")  # GPU 0
model = model.to("cpu")     # CPU
```

- Migration Guide - Migrate from original Demucs
- Implementation Notes - Technical details
- Test Examples - Import verification
Core dependencies:

```text
torch>=2.0.0
torchaudio>=2.0.0
einops
julius>=0.2.3
openunmix
pyyaml
tqdm
```

With UV:
```bash
# For MP3 output support
uv add "demucs-infer[mp3]"

# For quantized models
uv add "demucs-infer[quantized]"

# Or install all optional features
uv add "demucs-infer[mp3,quantized]"
```

With pip:
```bash
# For MP3 output support
pip install "demucs-infer[mp3]"        # Adds: lameenc>=1.2

# For quantized models
pip install "demucs-infer[quantized]"  # Adds: diffq>=0.2.1

# Or install all optional features
pip install "demucs-infer[mp3,quantized]"
```
With UV:

```bash
# Install in editable mode from local directory
cd /path/to/demucs-infer
uv pip install -e ".[dev]"

# Or add to your project as editable dependency
uv add -e ../path/to/demucs-infer
```

With pip:
```bash
# Install in editable mode from local directory
cd /path/to/demucs-infer

# Create virtual environment
python -m venv .venv
source .venv/bin/activate

# Install in editable mode with dev dependencies
pip install -e ".[dev]"
```

The package includes a comprehensive test suite using pytest:
```bash
# Run all tests
uv run pytest tests/ -v

# Run specific test file
uv run pytest tests/test_log.py -v

# Run with coverage
uv run pytest tests/ --cov=demucs_infer

# Run slow tests (requires model download)
uv run pytest tests/ -v -m "slow"
```

Continuous Integration:
- GitHub Actions automatically runs tests on every push/PR
- Tests validate both library API and CLI commands
- Python 3.10 with PyTorch 2.x compatibility verified
- Python: 3.8+
- PyTorch: 2.0 or later
- OS: Linux, macOS, Windows
- GPU: Optional (CUDA-capable GPU recommended for speed)
- PyTorch 2.x compatibility layer - Removed version restrictions
- PyTorch 2.6+ support - Compatible with `weights_only` default changes
- Minimal logging module - Replaced dora-search dependency
- Lazy imports - Made optional dependencies truly optional
- Inference-only packaging - Removed training code
- Clean dependency tree - 7 core packages instead of 15+
- Model Info API - Query model capabilities, separation types, and source translations
- Third-party model support - Module aliasing for community-trained models
- ✅ All separation models - HTDemucs, MDX, all variants
- ✅ Model architectures - Zero modifications to neural networks
- ✅ Separation algorithms - Identical audio processing
- ✅ Model weights - Same pretrained checkpoints
- ✅ Audio quality - 100% identical output
- ❌ Training code (`train.py`, `solver.py`, etc.)
- ❌ Evaluation scripts (`evaluate.py`)
- ❌ Training dependencies (hydra, dora-search, omegaconf)
- ❌ Dataset utilities (musdb, museval)
- ❌ Distributed training tools (submitit)
| Metric | Original Demucs | demucs-infer | Improvement |
|---|---|---|---|
| Python Files | 36+ files | 17 files | ~47% smaller |
| Core Dependencies | 15+ packages | 7 packages | ~53% fewer |
| PyTorch Restriction | `torchaudio<2.1` ❌ | No restriction ✅ | Flexible |
| Training Code | Included | Removed | Focused |
| Inference Quality | High | Same ✅ | Identical |
If Python cannot import `demucs_infer`:

With UV:

```bash
# Make sure you added demucs-infer to your project
uv add demucs-infer

# Or run with UV
uv run python your_script.py
```

With pip:
```bash
# Make sure you installed demucs-infer, not demucs
pip uninstall demucs
pip install demucs-infer
```

Out of memory errors:

```python
# Use smaller chunks or CPU
model = model.to("cpu")

# Or use two-stems mode (faster)
# demucs-infer --two-stems=drums "audio.wav"
```
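For memory-constrained runs, upstream Demucs' `apply_model` also supports chunked processing; a hedged sketch, assuming demucs-infer keeps the upstream `split`/`overlap` arguments (verify against your installed version):

```python
# Sketch: bound peak memory by processing the track in overlapping chunks,
# falling back to CPU when no GPU is available.
import torch
import torchaudio
from demucs_infer.pretrained import get_model
from demucs_infer.apply import apply_model

model = get_model("htdemucs_ft").eval()
wav, sr = torchaudio.load("song.wav")

device = "cuda" if torch.cuda.is_available() else "cpu"
with torch.no_grad():
    sources = apply_model(model, wav.unsqueeze(0), device=device,
                          split=True, overlap=0.25)
```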
Model download issues:

```text
# Models are downloaded from official Demucs repositories
# Check internet connection and firewall settings

# Default model cache location:
# Linux: ~/.cache/torch/hub/checkpoints/
# macOS: ~/Library/Caches/torch/hub/checkpoints/
# Windows: %USERPROFILE%\.cache\torch\hub\checkpoints\
```

MIT License (same as original Demucs)
Copyright (c) Meta Platforms, Inc. (Original Demucs)
Copyright (c) 2025 (demucs-infer modifications)
See LICENSE for details.
---
- Migration Help: See MIGRATION.md
- Original Demucs: facebookresearch/demucs
Made with ❤️ for the ML community

Based on the excellent work by Alexandre Défossez and Meta AI Research.