vllm-project/vllm projects
19 open and 0 closed projects found.
#31: PRs and issues related to NVIDIA hardware (updated Nov 16, 2025)
#12: torch.compile integration (updated Nov 15, 2025)
#35: Backlog for CI feature requests (updated Nov 14, 2025)
#10: Community requests for multi-modal models (updated Nov 14, 2025)
#20: Tracking failures occurring in CI (updated Nov 14, 2025)
#7: Tracks Ray issues and pull requests in vLLM (updated Nov 14, 2025)
#5: DeepSeek V3/R1 enablement. As of 2025-02-25, DeepSeek V3/R1 is supported with optimized block FP8 kernels, MLA, MTP speculative decoding, multi-node pipeline parallelism, expert parallelism, and W4A16 quantization; see the launch sketch after this list (updated Nov 14, 2025)
#8: Main tasks for the multi-modality workstream (#4194) (updated Nov 11, 2025)
#14: Tracker of known issues and bugs for serving Llama on vLLM (updated Nov 2, 2025)
#6: A list of onboarding tasks for first-time contributors to get started with vLLM (updated Oct 31, 2025)
#13: Enhancements to the Llama herd of models; see also https://github.com/vllm-project/vllm/issues/16114 (updated Oct 18, 2025)
#1: [Testing] Optimize V1 PP efficiency (updated Oct 6, 2025)
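The feature set tracked in project #5 can be exercised through vLLM's standard offline LLM API. The sketch below is a minimal illustration, not taken from any of the projects above: the checkpoint name, GPU count, and sampling settings are assumptions for the example, and features such as MLA and block FP8 are selected automatically from the model config rather than through explicit flags.

```python
# Minimal sketch: loading DeepSeek-R1 with vLLM's offline LLM API.
# Assumptions: a single 8-GPU node and the checkpoint published as
# deepseek-ai/DeepSeek-R1. MLA and block FP8 kernels are picked up
# from the model config; nothing here requests them explicitly.
from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-R1",  # assumed checkpoint name
    tensor_parallel_size=8,           # shard weights across 8 GPUs
    trust_remote_code=True,           # often needed for custom config/tokenizer code
)

params = SamplingParams(temperature=0.6, max_tokens=256)
outputs = llm.generate(["Summarize MLA in two sentences."], params)
print(outputs[0].outputs[0].text)
```

For multi-node deployments, the same engine arguments accept pipeline_parallel_size alongside tensor_parallel_size, which is how the multi-node PP setup from project #5 is typically expressed.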