- Huawei 2012 Labs., Compiler Lab.
- Beijing, China
- https://chenglong92.github.io
Pinned
- FlashAttention (Public): Flash Attention code study for Large Language Models (LLMs). C++ · 4
- LLM4Compiler (Public): The next-generation innovation of code optimization and compilers in the era of LLMs.