- I train and run inference for LLMs on Ascend NPUs.
- My current research interest is efficient AI, with a particular focus on sparse attention.
- I am both a researcher and an engineer.
- I design algorithms and build systems that work.
- I believe that in the LLM era, while ideas matter, the ability to implement them quickly and efficiently matters even more.
Pinned repositories
- vllm (fork of vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs. Python.
- vllm-ascend (fork of vllm-project/vllm-ascend): Community-maintained hardware plugin for vLLM on Ascend. Python.
- unified-cache-management (fork of ModelEngine-Group/unified-cache-management): Persist and reuse KV cache to speed up your LLM. Python.



