- Megatron-Core (MCore): Composable library with GPU-optimized building blocks for custom training frameworks. You can install it with pip or use it from within the Megatron-LM GitHub repository.
- Megatron-LM: Reference implementation with end-to-end examples built on Megatron Core.
- Megatron-Bridge: Training library with bidirectional Hugging Face ↔ Megatron checkpoint conversion, flexible training loops, and example model training recipes. For more information, refer to Megatron Bridge.
Install Megatron Core with pip:

- Install Megatron Core with required dependencies:

  ```bash
  pip install --no-build-isolation megatron-core[mlm,dev]
  ```

- Clone the repository for examples:

  ```bash
  git clone https://github.com/NVIDIA/Megatron-LM.git
  cd Megatron-LM
  pip install --no-build-isolation .[mlm,dev]
  ```
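After installing, a quick way to confirm that the building blocks import and compose correctly is to build a tiny GPT model directly from Megatron Core APIs. The sketch below follows the Megatron Core quickstart pattern; the hyperparameters are deliberately tiny and purely illustrative, and it assumes a CUDA-capable environment launched with `torchrun` (the file name `quickstart.py` is hypothetical) so that `torch.distributed` can be initialized.

```python
import os
import torch

from megatron.core import parallel_state
from megatron.core.transformer.transformer_config import TransformerConfig
from megatron.core.models.gpt.gpt_model import GPTModel
from megatron.core.models.gpt.gpt_layer_specs import get_gpt_layer_local_spec

# Assumes a launch such as `torchrun --nproc_per_node=1 quickstart.py`,
# so that RANK / WORLD_SIZE / LOCAL_RANK are set by the launcher.
torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
torch.distributed.init_process_group(backend="nccl")
parallel_state.initialize_model_parallel(
    tensor_model_parallel_size=1,
    pipeline_model_parallel_size=1,
)

# Deliberately tiny, illustrative configuration -- not a real training setup.
config = TransformerConfig(
    num_layers=2,
    hidden_size=64,
    num_attention_heads=4,
    use_cpu_initialization=True,
    pipeline_dtype=torch.float32,
)

model = GPTModel(
    config=config,
    transformer_layer_spec=get_gpt_layer_local_spec(),
    vocab_size=128,
    max_sequence_length=64,
)
print(f"Built GPTModel with {sum(p.numel() for p in model.parameters()):,} parameters")
```

From here, the examples/ directory in the cloned repository shows full pretraining loops that wrap models like this with distributed data parallelism, optimizers, and data loaders.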
- [2025/12] 🎉 Megatron Core development has moved to GitHub! All development and CI now happens in the open. We welcome community contributions.
- [2025/10] Megatron Dev Branch - early access branch with experimental features.
- [2025/10] Megatron Bridge - Bidirectional converter for interoperability between Hugging Face and Megatron checkpoints, featuring production-ready recipes for popular models.
- [2025/08] MoE Q3-Q4 2025 Roadmap - Comprehensive roadmap for MoE features including DeepSeek-V3, Qwen3, advanced parallelism strategies, FP8 optimizations, and Blackwell performance enhancements.
- [2025/08] GPT-OSS Model - Advanced features including YaRN RoPE scaling, attention sinks, and custom activation functions are being integrated into Megatron Core.
- [2025/06] Megatron MoE Model Zoo - Best practices and optimized configurations for training DeepSeek-V3, Mixtral, and Qwen3 MoE models with performance benchmarking and checkpoint conversion tools.
- [2025/05] Megatron Core v0.11.0 brings new capabilities for multi-data center LLM training (blog).
Previous News
- [2024/07] Megatron Core v0.7 improves scalability and training resiliency and adds support for multimodal training (blog).
- [2024/06] Megatron Core added support for Mamba-based models. Check out our paper An Empirical Study of Mamba-based Language Models and code example.
- [2024/01 Announcement] NVIDIA has released the core capabilities in Megatron-LM into Megatron Core in this repository. Megatron Core expands upon Megatron-LM's GPU-optimized techniques with more cutting-edge innovations on system-level optimizations, featuring composable and modular APIs. Explore the [Megatron Core intro](#megatron-core) for more details.
```
Megatron-LM/
├── megatron/
│   ├── core/                  # Megatron Core (kernels, parallelism, building blocks)
│   │   ├── models/            # Transformer models
│   │   ├── transformer/       # Transformer building blocks
│   │   ├── tensor_parallel/   # Tensor parallelism
│   │   ├── pipeline_parallel/ # Pipeline parallelism
│   │   ├── distributed/       # Distributed training (FSDP, DDP)
│   │   ├── optimizer/         # Optimizers
│   │   ├── datasets/          # Dataset loaders
│   │   ├── inference/         # Inference engines
│   │   └── export/            # Model export (e.g. TensorRT-LLM)
│   ├── training/              # Training scripts
│   ├── inference/             # Inference server
│   ├── legacy/                # Legacy components
│   └── post_training/         # Post-training (RLHF, etc.)
├── examples/                  # Ready-to-use training examples
├── tools/                     # Utility tools
├── tests/                     # Comprehensive test suite
└── docs/                      # Documentation
```
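The directories under megatron/core/ map directly to importable Python subpackages. A few representative imports are shown below; the names are taken from the tree above, and the exact symbols exported can vary between releases.

```python
# Each directory under megatron/core/ is an importable subpackage.
# Representative imports only; exported symbols can vary between releases.
from megatron.core import tensor_parallel    # tensor-parallel layers and utilities
from megatron.core import pipeline_parallel  # pipeline-parallel schedules
from megatron.core.distributed import DistributedDataParallel  # Megatron Core DDP wrapper
from megatron.core.optimizer import OptimizerConfig            # optimizer configuration
```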
For our latest performance benchmarking results, please refer to NVIDIA NeMo Framework Performance Summary.
Our codebase efficiently trains models from 2B to 462B parameters across thousands of GPUs, achieving up to 47% Model FLOP Utilization (MFU) on H100 clusters.
Benchmark Configuration:
- Vocabulary size: 131,072 tokens
- Sequence length: 4096 tokens
- Model scaling: Hidden size, attention heads, and layer count varied to reach the target parameter counts (a rough parameter-count estimate is sketched just after this list)
- Communication optimizations: Fine-grained overlapping with DP (`--overlap-grad-reduce`, `--overlap-param-gather`), TP (`--tp-comm-overlap`), and PP (enabled by default)
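For intuition about how hidden size, head count, and depth are scaled to hit a target parameter budget, here is a rough back-of-the-envelope estimate for a dense GPT-style model. It is only a sketch: the 12·L·h² term covers the attention and MLP weights per layer, smaller terms (biases, layernorms, position embeddings) are ignored, and the example numbers use the standard GPT-3 architecture rather than the exact benchmark configurations.

```python
def approx_gpt_params(num_layers: int, hidden_size: int, vocab_size: int) -> float:
    """Rough dense-GPT parameter count: ~12*L*h^2 for attention + MLP blocks,
    plus vocab_size*h for the embedding/output matrix. Ignores biases,
    layernorms, and position embeddings, which are comparatively small."""
    return 12 * num_layers * hidden_size**2 + vocab_size * hidden_size

# Illustrative check: the standard GPT-3 architecture (96 layers, hidden size 12288)
# combined with the 131,072-token vocabulary used in these benchmarks.
print(f"{approx_gpt_params(96, 12288, 131_072) / 1e9:.1f}B parameters")  # ~175.6B
```

This is why the GPT-3 configuration discussed below ends up slightly above 175B parameters: the larger vocabulary inflates the embedding term.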
Key Results:
- 6144 H100 GPUs: Successfully benchmarked 462B parameter model training
- Superlinear scaling: MFU increases from 41% to 47-48% with model size
- End-to-end measurement: Throughputs include all operations (data loading, optimizer steps, communication, logging)
- Production ready: Full training pipeline with checkpointing and fault tolerance
- Note: Performance results measured without training to convergence
Our weak-scaling results show superlinear scaling: MFU increases from 41% for the smallest model considered to 47-48% for the largest models, because larger GEMMs have higher arithmetic intensity and are consequently more efficient to execute.
We also strong-scaled the standard GPT-3 model (our version has slightly more than 175 billion parameters due to the larger vocabulary size) from 96 H100 GPUs to 4608 GPUs, using the same batch size of 1152 sequences throughout. Communication becomes more exposed at larger scale, leading to a reduction in MFU from 47% to 42%.
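As context for the MFU figures above, Model FLOP Utilization is the ratio of the model FLOPs actually sustained to the aggregate peak FLOPs of the GPUs. The sketch below shows a simplified version of that calculation: the 6·N FLOPs-per-token rule of thumb ignores the attention term, the H100 peak shown is the approximate dense BF16 number, and the throughput in the example is made up purely for illustration.

```python
def approx_mfu(tokens_per_second: float, num_params: float, num_gpus: int,
               peak_flops_per_gpu: float = 989e12) -> float:
    """Simplified MFU: forward + backward costs roughly 6 FLOPs per parameter
    per token (attention FLOPs ignored), divided by aggregate peak throughput.
    989e12 is approximately the H100 dense BF16 peak in FLOP/s."""
    model_flops_per_second = 6 * num_params * tokens_per_second
    return model_flops_per_second / (num_gpus * peak_flops_per_gpu)

# Purely illustrative inputs, not measured results:
print(f"MFU ~ {approx_mfu(tokens_per_second=4.3e5, num_params=175e9, num_gpus=1024):.1%}")
# -> roughly 45% with these made-up numbers
```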
- 📚 Documentation - Official documentation
- 🐛 Issues - Bug reports and feature requests
We ❤️ contributions! Ways to contribute:
- 🐛 Report bugs - Help us improve reliability
- 💡 Suggest features - Shape the future of Megatron Core
- 📝 Improve docs - Make Megatron Core more accessible
- 🔧 Submit PRs - Contribute code improvements
If you use Megatron in your research or project, please cite the following paper:
```bibtex
@article{megatron-lm,
  title={Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism},
  author={Shoeybi, Mohammad and Patwary, Mostofa and Puri, Raul and LeGresley, Patrick and Casper, Jared and Catanzaro, Bryan},
  journal={arXiv preprint arXiv:1909.08053},
  year={2019}
}
```