[Bug] In version 0.10.1, a multimodal model request that carries no text prompt fails immediately with an error. Version 0.10.2 does not fix it either. #4145

@waveman800

Description

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.

Describe the bug

In version 0.10.1, a multimodal model request that carries no text prompt fails immediately with an error. This case should be handled for compatibility: when the text is empty, an empty string should still be passed to the model.
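
Concretely, the failing case is a chat request whose user message content list contains only an image item and no text item. A minimal sketch of such a message (placeholder values):

    # Minimal illustration of the failing request shape (placeholder image URL).
    # The content list carries an image_url part but no {"type": "text"} part.
    messages = [{
        'role': 'user',
        'content': [
            {'type': 'image_url', 'image_url': {'url': 'https://example.com/demo.png'}},
        ],
    }]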

Reproduction

docker run -d --name lmdeploy_Qwen2.5-VL-32B-Instruct \
  --runtime=nvidia --gpus "device=0" \
  -v /home/devuser/models/other-models:/root/model \
  -p 8015:23333 \
  --ipc=host \
  harbor.bhidi.com/ai/lmdeploy:0.10.1-long-cuda118-fixed1 \
  sh -c "pip install qwen_vl_utils && \
    lmdeploy serve api_server /root/model/Qwen2.5-VL-32B-Instruct-AWQ \
      --model-name Qwen2.5-VL-32B-Instruct-AWQ \
      --session-len 120000 \
      --cache-max-entry-count 0.8 \
      --max-batch-size 4 \
      --backend turbomind \
      --enable-dynamic-compression \
      --importance-threshold 0.6 \
      --min-compression-ratio 0.5 \
      --max-compression-ratio 0.8 \
      --segment-strategy paragraph"
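
Once the server is up, a streaming image-only request against the OpenAI-compatible endpoint reproduces the error. A sketch using the openai client (the port matches the -p 8015:23333 mapping above; the image URL is a placeholder):

    # Reproduction sketch: image-only content, no text item (assumes the
    # `openai` Python package is installed; the URL is a placeholder).
    from openai import OpenAI

    client = OpenAI(base_url='http://localhost:8015/v1', api_key='none')
    stream = client.chat.completions.create(
        model='Qwen2.5-VL-32B-Instruct-AWQ',
        messages=[{
            'role': 'user',
            'content': [
                {'type': 'image_url', 'image_url': {'url': 'https://example.com/demo.png'}},
            ],
        }],
        stream=True,
    )
    for chunk in stream:  # the server raises before the first chunk arrives
        print(chunk)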

Environment

sys.platform: linux
Python: 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 4090
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04.2) 11.4.0
PyTorch: 2.6.0+cu118
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2024.2-Product Build 20240605 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v3.5.3 (Git Hash 66f0cb9eb66affd2da3bf5f8d897376f04aae6af)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.8
  - NVCC architecture flags: -gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_90,code=sm_90
  - CuDNN 90.1
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, COMMIT_SHA=2236df1770800ffea5697b11b0bb0d910b2e59e1, CUDA_VERSION=11.8, CUDNN_VERSION=9.1.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_FBGEMM -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, TORCH_VERSION=2.6.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=1, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=ON, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF, 

TorchVision: 0.21.0+cu118
LMDeploy: 0.10.1+
transformers: 4.56.1
fastapi: 0.121.3
pydantic: 2.12.4
triton: 3.2.0
NVIDIA Topology: 
	GPU0	CPU Affinity	NUMA Affinity	GPU NUMA ID
GPU0	 X 	0-15	0		N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

Error traceback

ERROR:    Exception in ASGI application

+ Exception Group Traceback (most recent call last):
|   File "/opt/py3/lib/python3.10/site-packages/starlette/_utils.py", line 79, in collapse_excgroups
|     yield
|   File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 270, in __call__
|     async with anyio.create_task_group() as task_group:
|   File "/opt/py3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 781, in __aexit__
|     raise BaseExceptionGroup(
| exceptiongroup.ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
+-+---------------- 1 ----------------
| Traceback (most recent call last):
|   File "/opt/py3/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 403, in run_asgi
|     result = await app(  # type: ignore[func-returns-value]
|   File "/opt/py3/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 60, in __call__
|     return await self.app(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/fastapi/applications.py", line 1134, in __call__
|     await super().__call__(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/applications.py", line 107, in __call__
|     await self.middleware_stack(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
|     raise exc
|   File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
|     await self.app(scope, receive, _send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in __call__
|     await self.app(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 63, in __call__
|     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
|     raise exc
|   File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
|     await app(scope, receive, sender)
|   File "/opt/py3/lib/python3.10/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
|     await self.app(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 716, in __call__
|     await self.middleware_stack(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 736, in app
|     await route.handle(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/routing.py", line 290, in handle
|     await self.app(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/fastapi/routing.py", line 125, in app
|     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
|     raise exc
|   File "/opt/py3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 42, in wrapped_app
|     await app(scope, receive, sender)
|   File "/opt/py3/lib/python3.10/site-packages/fastapi/routing.py", line 112, in app
|     await response(scope, receive, send)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 269, in __call__
|     with collapse_excgroups():
|   File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
|     self.gen.throw(typ, value, traceback)
|   File "/opt/py3/lib/python3.10/site-packages/starlette/_utils.py", line 85, in collapse_excgroups
|     raise exc
|   File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 273, in wrap
|     await func()
|   File "/opt/py3/lib/python3.10/site-packages/starlette/responses.py", line 253, in stream_response
|     async for chunk in self.body_iterator:
|   File "/opt/py3/lib/python3.10/site-packages/lmdeploy/serve/openai/api_server.py", line 489, in completion_stream_generator
|     async for res in result_generator:
|   File "/opt/py3/lib/python3.10/site-packages/lmdeploy/serve/async_engine.py", line 765, in generate
|     prompt_input = await self._get_prompt_input(prompt,
|   File "/opt/py3/lib/python3.10/site-packages/lmdeploy/serve/vl_async_engine.py", line 99, in _get_prompt_input
|     results = await self.vl_encoder.wrap_for_turbomind(results,
|   File "/opt/py3/lib/python3.10/site-packages/lmdeploy/vl/engine.py", line 124, in wrap_for_turbomind
|     result = self.model.to_turbomind(messages,
|   File "/opt/py3/lib/python3.10/site-packages/lmdeploy/vl/model/qwen2.py", line 186, in to_turbomind
|     prompt, IMAGE_TOKEN = self.proc_messages(messages, chat_template, sequence_start)
|   File "/opt/py3/lib/python3.10/site-packages/lmdeploy/vl/model/qwen2.py", line 140, in proc_messages
|     prompt = content[0]
| IndexError: list index out of range
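
The crash comes from proc_messages assuming at least one text item is present. A possible compatibility fix (a sketch against the frame above, not the actual lmdeploy patch) is to fall back to an empty string when the gathered text list is empty:

    # Hypothetical guard for lmdeploy/vl/model/qwen2.py::proc_messages.
    # The released code does `prompt = content[0]` and raises IndexError
    # when the message carries no text item; an empty-string fallback
    # would let image-only requests through.
    prompt = content[0] if content else ''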
