
shengliangxu
  • Nvidia & OmniML | Meta | Plus | Pure Storage | UW - Seattle | SJTU
  • San Jose, CA


Popular repositories

  1. NvChad (Public)

    Forked from NvChad/NvChad

    An attempt to make the Neovim CLI function like an IDE while being beautiful, with a blazing-fast startup time

    Lua

  2. NeMo (Public)

    Forked from NVIDIA-NeMo/NeMo

    A scalable generative AI framework built for researchers and developers working on large language models, multimodal AI, and speech AI (automatic speech recognition and text-to-speech)

    Python

  3. typing-word (Public)

    Forked from zyronon/TypeWords

    Memorize vocabulary words in the browser

    Vue

  4. TensorRT-LLM (Public)

    Forked from NVIDIA/TensorRT-LLM

    TensorRT-LLM provides users with an easy-to-use Python API to define large language models (LLMs) and supports state-of-the-art optimizations for efficient inference on NVIDIA GPUs. TensorR…

    C++

  5. TensorRT-Model-Optimizer (Public)

    Forked from NVIDIA/Model-Optimizer

    A unified library of state-of-the-art model optimization techniques such as quantization, pruning, distillation, and speculative decoding. It compresses deep learning models for downstream deployment…

    Python

  6. vllm (Public)

    Forked from vllm-project/vllm

    A high-throughput and memory-efficient inference and serving engine for LLMs

    Python