leideng/README.md

Hi there 👋

- 🚀 I train and infer LLMs on Ascend NPUs.
- 🧠 My current research interest is efficient AI, with a particular focus on sparse attention.
- 🎓 I am both a researcher and an engineer 🛠️.
- 🔧 I design algorithms and build systems that work.
- ⚡ I believe that in the LLM era, while an idea is important, the ability to implement that idea quickly and efficiently is even more vital.
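As a toy illustration of the sparse-attention idea mentioned above (not code from any of the repositories below), here is a minimal sketch assuming one common sparsity pattern, a sliding window: each query attends only to keys within a fixed distance, so the masked score matrix has O(n·w) active entries instead of O(n²).

```python
import numpy as np

def sliding_window_attention(q, k, v, window=2):
    """Toy sparse attention: each query attends only to keys within
    `window` positions (a sliding-window mask). A real kernel would
    skip the masked blocks entirely rather than materialize them."""
    seq_len, d = q.shape
    scores = q @ k.T / np.sqrt(d)                        # (seq_len, seq_len)
    idx = np.arange(seq_len)
    mask = np.abs(idx[:, None] - idx[None, :]) > window  # True = too far away
    scores[mask] = -np.inf                               # drop distant pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over the window
    return weights @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 4))
out = sliding_window_attention(x, x, x, window=1)
print(out.shape)  # (6, 4)
```

With `window=1` each row of the softmax has at most three nonzero weights, which is the whole point: the dense quadratic score matrix is never needed.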

Pinned repositories

1. **AI-primer** (Jupyter Notebook)

2. **vllm** (Python, forked from vllm-project/vllm): A high-throughput and memory-efficient inference and serving engine for LLMs.

3. **vllm-ascend** (Python, forked from vllm-project/vllm-ascend): Community-maintained hardware plugin for vLLM on Ascend.

4. **unified-cache-management** (Python, forked from ModelEngine-Group/unified-cache-management): Persist and reuse KV cache to speed up your LLM.