Nvidia & OmniML | Meta | Plus | Pure Storage | UW - Seattle | SJTU
San Jose, CA
nvidia.com
Popular repositories
- NvChad (forked from NvChad/NvChad)
  An attempt to make Neovim functional like an IDE from the CLI while being very beautiful, with a blazing-fast startup time.
  Lua
- NeMo (forked from NVIDIA-NeMo/NeMo)
  A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech).
  Python
- TensorRT-LLM (forked from NVIDIA/TensorRT-LLM)
  TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations to perform inference efficiently on NVIDIA GPUs. TensorR…
  C++
- TensorRT-Model-Optimizer (forked from NVIDIA/Model-Optimizer)
  A unified library of state-of-the-art model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresses deep learning models for downstream deployment…
  Python
- vllm (forked from vllm-project/vllm)
  A high-throughput and memory-efficient inference and serving engine for LLMs.
  Python


