Meet the 2025 PyTorch Ambassadors (Announcements, Blog): We’re excited to welcome the first-ever cohort of PyTorch Ambassadors! The new PyTorch Ambassador… By PyTorch Foundation, October 9, 2025.
SuperOffload: Unleashing the Power of Large-Scale LLM Training on Superchips (Blog): TL;DR: Efficient full-parameter fine-tuning of GPT-OSS-20B & Qwen3-14B models on a single NVIDIA GH200 and… By Xinyu Lian, Minjia Zhang (SSAIL Lab, University of Illinois Urbana-Champaign), Masahiro Tanaka (Anyscale), Olatunji Ruwase (Snowflake), October 9, 2025.
Open Agent Summit at PyTorch Conference (Announcements, Blog): As generative AI evolves beyond static prompts, this summit brings together top researchers, builders, and… By PyTorch Foundation, October 7, 2025.
Snowflake Joins the PyTorch Foundation as a Premier Member (Announcements, Blog): The PyTorch Foundation, a community-driven hub supporting the open source PyTorch framework and a broader… By PyTorch Foundation, October 7, 2025.
When Quantization Isn’t Enough: Why 2:4 Sparsity Matters (Blog, Community): TL;DR: Combining 2:4 sparsity with quantization offers a powerful approach to compress large language models… By Mohammad Mozaffari, Jesse Cai, Supriya Rao, October 6, 2025.
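As a companion to the 2:4 sparsity teaser above, here is a minimal, self-contained sketch of the 2:4 pattern itself: in every contiguous group of four weights, only the two largest-magnitude entries are kept. This is an illustration of the pattern, not the recipe from the post; the function name and shapes are made up for the example.

```python
import torch

def apply_2_4_mask(weight: torch.Tensor) -> torch.Tensor:
    """Zero out the 2 smallest-magnitude values in each group of 4 along the last dim."""
    out_features, in_features = weight.shape
    assert in_features % 4 == 0, "2:4 sparsity groups the last dim in fours"
    groups = weight.reshape(out_features, in_features // 4, 4)
    # Indices of the 2 largest-magnitude entries per group of 4.
    topk = groups.abs().topk(k=2, dim=-1).indices
    mask = torch.zeros_like(groups, dtype=torch.bool).scatter_(-1, topk, True)
    return (groups * mask).reshape(out_features, in_features)

w = torch.randn(8, 16)
w_sparse = apply_2_4_mask(w)
# Half of the entries are now zero, arranged in the hardware-friendly 2:4 layout.
print((w_sparse == 0).float().mean())  # ~0.5
```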
Measuring Intelligence Summit at PyTorch Conference (Announcements, Blog): The Measuring Intelligence Summit on October 21 in San Francisco, co-located with PyTorch Conference 2025,… By PyTorch Foundation, October 1, 2025.
TorchAO Quantized Models and Quantization Recipes Now Available on HuggingFace Hub (Blog): PyTorch now offers native quantized variants of Phi4-mini-instruct, Qwen3, SmolLM3-3B and gemma-3-270m-it through a collaboration… By Meta: Jerry Zhang, Scott Roy, Mergen Nachin, Kimish Patel, Supriya Rao, Jack Zhang, Guang Yang & Unsloth AI: Daniel Han, September 19, 2025.
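For readers who want a feel for TorchAO before opening the post, the sketch below applies TorchAO’s quantize_ API with an int8 weight-only config to a toy module. It is not the exact recipe used for the published checkpoints; the tiny Sequential model is a stand-in for a real LLM, and a recent torchao install is assumed.

```python
import torch
from torchao.quantization import quantize_, int8_weight_only

# Stand-in model; the published recipes target full LLM checkpoints instead.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
)

# Swaps eligible Linear weights for int8 weight-only quantized versions in place.
quantize_(model, int8_weight_only())

x = torch.randn(2, 1024)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([2, 1024])
```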
AI Infra Summit at PyTorch Conference (Announcements, Blog): On October 21st, the AI Infra Summit comes to San Francisco and PyTorch Conference 2025,… By PyTorch Foundation, September 18, 2025.
Experience in Reducing PT2 Compilation Time for Meta Internal Workloads (Blog): The Challenge of PyTorch 2.0 Compilation: Since the release of PyTorch 2.0 (PT2) and its… By Mingming Ding, James Wu, Oguz Ulgen, Sam Larsen, Bob Ren, Laith Sakka, Pian Pawakapan, Animesh Jain, Edward Yang, Yuzhen Huang, Ruilin Chen, Daohang Shi, Shuai Yang, Menglu Yu, Chunzhi Yang, Jade Nie, September 18, 2025.
High-performance quantized LLM inference on Intel CPUs with native PyTorch (Blog): PyTorch 2.8 has just been released with a set of exciting new features, including a… By Intel PyTorch Team, September 17, 2025.
PyTorch 2.8 Brings Native XCCL Support to Intel GPUs: Case Studies from Argonne National Laboratory (Blog): Intel announces a major enhancement for distributed training in PyTorch 2.8: the native integration of… By Intel PyTorch Team, Argonne National Laboratory, September 12, 2025.
Disaggregated Inference at Scale with PyTorch & vLLM (Blog, Community): Key takeaways: PyTorch and vLLM have been organically integrated to accelerate cutting-edge generative AI applications,… By Hongyi Jia, Jinghui Zhang, Lu Fang, Stephen Chen, Yan Cui, Ye (Charlotte) Qi, Zijing Liu, September 12, 2025.
Distributed Checkpoint: Efficient checkpointing in large-scale jobs (Blog): As training jobs become larger, the likelihood of failures such as preemptions, crashes, or infrastructure… By Meta: Saurabh Mishra, Meet Vadakkanchery, Pradeep Fernando, Saiteja Samudrala; Google: Gerson Kroiz, Jingxin Ye, Viacheslav Kovalevskyi, September 11, 2025.
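The post above covers torch.distributed.checkpoint (DCP); as a quick orientation, here is a minimal single-process sketch of saving and loading with it. Real large-scale jobs would run this across many ranks with sharded state; the path and toy model here are illustrative assumptions only.

```python
import torch
import torch.distributed.checkpoint as dcp

model = torch.nn.Linear(16, 4)
state_dict = {"model": model.state_dict()}

# Save a checkpoint; each rank writes only the shards it owns (a single rank here).
dcp.save(state_dict, checkpoint_id="/tmp/dcp_example")

# Load back in place: DCP restores into the provided state_dict structure.
restored = {"model": torch.nn.Linear(16, 4).state_dict()}
dcp.load(restored, checkpoint_id="/tmp/dcp_example")
```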
Yellow Teaming on Arm: A look inside our responsible AI workshop (Blog, Community): A few months back, I traveled to Berlin to attend the WeAreDevelopers World Congress. During… By Annie Tallund, September 5, 2025.
Fast 2-Simplicial Attention: Hardware-Efficient Kernels in TLX (Blog): In this blog post, we explore the kernel design details presented in the paper Fast… By Sijia Chen, Timothy Chou, Aurko Roy†, Hongtao Yu, Yuanwei (Kevin) Fang, Xiaodong Wang, Jiecao Yu, Tony CW Liu†, Chuanhao Zhuge, Josh Fromm, Ying Zhang†, Rohan Anil†, Ajit Mathews, September 5, 2025.
PyTorch 2.8+TorchAO: Unlock Efficient LLM Inference on Intel® AI PCs (Blog): Large Language Models (LLMs) have transformed tasks across numerous industries, including drafting emails, generating code,… By Intel PyTorch Team, September 3, 2025.
Accelerating 2K scale pre-training up to 1.28x with TorchAO, MXFP8 and TorchTitan on Crusoe B200 Cluster (Blog): TL;DR: 1.22x–1.28x training acceleration with MXFP8, with equivalent convergence compared to BF16. We recently… By Less Wright, Vasiliy Kuznetsov, Daniel Vega-Myhre, Driss Guessous, Hamid Shojanazeri, Elias Ellison, Martin Cala, Ethan Petersen, September 3, 2025.
A Primer on LLM Post-Training (Blog): Large Language Models (LLMs) have revolutionized how we write and consume documents. In the past… By Davide Testuggine, August 26, 2025.
DRAMA Model Inference Efficiency Boosted by 1.7x-2.3x (Blog): TL;DR: NJTs (Nested Jagged Tensors) boost DRAMA model inference efficiency by 1.7x-2.3x, making it more… By Shreya Goyal, August 22, 2025.
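For context on the NJTs credited above, the snippet below is a small illustration of nested tensors with the jagged layout: variable-length sequences batched without padding. The shapes are made up for the example and this is not the DRAMA pipeline itself.

```python
import torch

# Three sequences with different lengths (3, 5, 2) and a shared feature dim of 8.
seqs = [torch.randn(3, 8), torch.randn(5, 8), torch.randn(2, 8)]
nt = torch.nested.nested_tensor(seqs, layout=torch.jagged)

# Dense ops such as Linear apply per-sequence, with no padding wasted.
proj = torch.nn.Linear(8, 4)
out = proj(nt)
print([t.shape for t in out.unbind()])
# [torch.Size([3, 4]), torch.Size([5, 4]), torch.Size([2, 4])]
```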
ZenFlow: Stall-Free Offloading Engine for LLM Training (Blog): Introduction: ZenFlow is a new extension to DeepSpeed introduced in summer 2025, designed as a… By Tingfeng Lan, Yusen Wu, Bin Ma, Zhaoyuan Su, Rui Yang, Tekin Bicer, Masahiro Tanaka, Olatunji Ruwase, Dong Li, Yue Cheng, August 20, 2025.