Conversation

@mengker33 commented on Nov 6, 2025

Purpose

Enable the PaddleOCR-VL model: https://huggingface.co/PaddlePaddle/PaddleOCR-VL

Test commands:

Launch the server:
VLLM_SKIP_WARMUP=true PT_HPU_LAZY_MODE=1 vllm serve PaddlePaddle/PaddleOCR-VL --host 0.0.0.0 --port 8080 --trust-remote-code --gpu-memory-utilization 0.5 --max-model-len 16384 --served-model-name 'PaddleOCR-VL-0.9B'
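
Before pointing a client at the server, a quick sanity check is to list the served models on vLLM's OpenAI-compatible endpoint (a sketch assuming the launch flags above):

# The response should list "PaddleOCR-VL-0.9B", matching --served-model-name.
curl http://127.0.0.1:8080/v1/models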

To use PaddleOCR as the client:

  • First, install the Paddle-related packages:

python -m pip install paddlepaddle==3.2.0 -i https://www.paddlepaddle.org.cn/packages/stable/cpu/
pip install paddlex==3.3.4
python -m pip install -U "paddleocr[doc-parser]"
python -m pip install https://paddle-whl.bj.bcebos.com/nightly/cu126/safetensors/safetensors-0.6.2.dev0-cp38-abi3-linux_x86_64.whl

  • Then call the PaddleOCR CLI:

paddleocr doc_parser -i https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/paddleocr_vl_demo.png --enable_mkldnn False --vl_rec_backend vllm-server --vl_rec_server_url http://127.0.0.1:8080/v1 --save_path ./output
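
For reference, the CLI's vllm-server backend talks to the same OpenAI-compatible endpoint, so a raw chat-completions request can also be issued directly. This is only an illustrative sketch: the "OCR:" prompt is an assumption, not necessarily the exact request the PaddleOCR client constructs.

# Illustrative request; the prompt text is an assumption, and the image is
# the same demo image used in the CLI example above.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "PaddleOCR-VL-0.9B",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "image_url", "image_url": {"url": "https://paddle-model-ecology.bj.bcebos.com/paddlex/imgs/demo_image/paddleocr_vl_demo.png"}},
        {"type": "text", "text": "OCR:"}
      ]
    }]
  }'

If the CLI run completes, the parsed document results should be written under ./output, per the --save_path flag.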

@mengker33 changed the title from "Enable paddleocr_vl" to "Enable PaddleOCR-VL" on Nov 6, 2025
@mengker33 force-pushed the enable_paddleocr branch 8 times, most recently from c8d2bc6 to 8340f96 on November 13, 2025 at 05:28
@mengker33 force-pushed the enable_paddleocr branch 3 times, most recently from 83f7661 to 7c7de46 on November 13, 2025 at 06:03
@Wei-Lin-Intel left a comment

LGTM

@czhu15 left a comment

LGTM

@czhu15 merged commit e8b7864 into HabanaAI:aice/v1.22.0 on Nov 24, 2025
3 checks passed