Description
First of all, thank you for your impressive work! I've found that your model fares better than the latest LLaVA (13B) on some of my tasks.
I've tried running the GGUF version of MiniCPM-V 2.0 on LocalAI v2.15.0 with the llama.cpp backend, but it fails to load the CLIP model. I've made sure to include both the mmproj and the model files.
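For reference, here is the model definition I'm using, reconstructed from the "Configuration read" entry in the debug log below (the minicpm.yaml file name is just what I called it; the values should match the log):

```yaml
# minicpm.yaml - model definition (reconstructed; values taken from the debug log below)
name: minicpm
backend: llama-cpp
mmproj: minicpm-mmproj.gguf
gpu_layers: 30
parameters:
  model: minicpm-v2-f16.gguf
roles:
  user: "USER:"
  assistant: "ASSISTANT:"
  system: "SYSTEM:"
template:
  chat: |
    A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
    {{.Input}}
    ASSISTANT:
```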
Loading fails with the following log lines:
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr key clip.vision.image_grid_pinpoints not found in file
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr key clip.vision.mm_patch_merge_type not found in file
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr key clip.vision.image_crop_resolution not found in file
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: failed to load vision model tensors
I'm attempting to run it on an RTX 3080 with 10 GB of VRAM, and I've tried both the Q8 and the f16 versions along with the mmproj from here: https://huggingface.co/mzwing/MiniCPM-V-2-GGUF
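The failure is triggered by a standard OpenAI-style vision request, roughly like this (host and port are from my local setup; the payload matches the "Request received" entry in the log):

```bash
# Reproduction request; adjust host/port to your deployment
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "minicpm",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "List all the elements that you see. Do not repeat yourself."},
        {"type": "image_url", "image_url": {"url": "https://img.leboncoin.fr/api/v1/lbcpb1/images/82/03/15/8203153649130fb8a70f4f49986280025bb71044.jpg?rule=ad-large"}}
      ]
    }]
  }'
```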
Please find the complete log below:
LocalAI (llama.cpp backend) logs
8:30PM DBG Request received: {"model":"minicpm","language":"","n":0,"top_p":null,"top_k":null,"temperature":null,"max_tokens":null,"echo":false,"batch":0,"ignore_eos":false,"repeat_penalty":0,"n_keep":0,"frequency_penalty":0,"presence_penalty":0,"tfz":null,"typical_p":null,"seed":null,"negative_prompt":"","rope_freq_base":0,"rope_freq_scale":0,"negative_prompt_scale":0,"use_fast_tokenizer":false,"clip_skip":0,"tokenizer":"","file":"","response_format":{},"size":"","prompt":null,"instruction":"","input":null,"stop":null,"messages":[{"role":"user","content":[{"text":"List all the elements that you see. Do not repeat yourself.","type":"text"},{"image_url":{"url":"https://img.leboncoin.fr/api/v1/lbcpb1/images/82/03/15/8203153649130fb8a70f4f49986280025bb71044.jpg?rule=ad-large"},"type":"image_url"}]}],"functions":null,"function_call":null,"stream":false,"mode":0,"step":0,"grammar":"","grammar_json_functions":null,"backend":"","model_base_name":""}
8:30PM DBG Configuration read: &{PredictionOptions:{Model:minicpm-v2-f16.gguf Language: N:0 TopP:0xc0000e8028 TopK:0xc0000e8020 Temperature:0xc00040e408 Maxtokens:0xc0000e8098 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:1.05 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0000e8100 TypicalP:0xc0000e80f8 Seed:0xc0000e8120 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:minicpm F16:0xc0000e80c0 Threads:0xc00040e3c0 Debug:0xc0000e8840 Roles:map[assistant:ASSISTANT: system:SYSTEM: user:USER:] Embeddings:false Backend:llama-cpp TemplateConfig:{Chat:A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
{{.Input}}
ASSISTANT:
ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName: ParallelCalls:false NoGrammar:false ResponseRegex:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0000e80f0 MirostatTAU:0xc0000e80e8 Mirostat:0xc0000e80e0 NGPULayers:0xc00040e3c8 MMap:0xc00040e400 MMlock:0xc0000e8119 LowVRAM:0xc0000e8119 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0000e80b0 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj:minicpm-mmproj.gguf RopeScaling:1 32000 ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false DownloadFiles:[] Description: Usage:}
8:30PM DBG Parameters: &{PredictionOptions:{Model:minicpm-v2-f16.gguf Language: N:0 TopP:0xc0000e8028 TopK:0xc0000e8020 Temperature:0xc00040e408 Maxtokens:0xc0000e8098 Echo:false Batch:0 IgnoreEOS:false RepeatPenalty:1.05 Keep:0 FrequencyPenalty:0 PresencePenalty:0 TFZ:0xc0000e8100 TypicalP:0xc0000e80f8 Seed:0xc0000e8120 NegativePrompt: RopeFreqBase:0 RopeFreqScale:0 NegativePromptScale:0 UseFastTokenizer:false ClipSkip:0 Tokenizer:} Name:minicpm F16:0xc0000e80c0 Threads:0xc00040e3c0 Debug:0xc0000e8840 Roles:map[assistant:ASSISTANT: system:SYSTEM: user:USER:] Embeddings:false Backend:llama-cpp TemplateConfig:{Chat:A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
{{.Input}}
ASSISTANT:
ChatMessage: Completion: Edit: Functions: UseTokenizerTemplate:false} PromptStrings:[] InputStrings:[] InputToken:[] functionCallString: functionCallNameString: FunctionsConfig:{DisableNoAction:false NoActionFunctionName: NoActionDescriptionName: ParallelCalls:false NoGrammar:false ResponseRegex:} FeatureFlag:map[] LLMConfig:{SystemPrompt: TensorSplit: MainGPU: RMSNormEps:0 NGQA:0 PromptCachePath: PromptCacheAll:false PromptCacheRO:false MirostatETA:0xc0000e80f0 MirostatTAU:0xc0000e80e8 Mirostat:0xc0000e80e0 NGPULayers:0xc00040e3c8 MMap:0xc00040e400 MMlock:0xc0000e8119 LowVRAM:0xc0000e8119 Grammar: StopWords:[] Cutstrings:[] TrimSpace:[] TrimSuffix:[] ContextSize:0xc0000e80b0 NUMA:false LoraAdapter: LoraBase: LoraScale:0 NoMulMatQ:false DraftModel: NDraft:0 Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj:minicpm-mmproj.gguf RopeScaling:1 32000 ModelType: YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0} AutoGPTQ:{ModelBaseName: Device: Triton:false UseFastTokenizer:false} Diffusers:{CUDA:false PipelineType: SchedulerType: EnableParameters: CFGScale:0 IMG2IMG:false ClipSkip:0 ClipModel: ClipSubFolder: ControlNet:} Step:0 GRPC:{Attempts:0 AttemptsSleepTime:0} VallE:{AudioPath:} CUDA:false DownloadFiles:[] Description: Usage:}
8:30PM DBG Prompt (before templating): USER:[img-0]List all the elements that you see. Do not repeat yourself.
8:30PM DBG Template found, input modified to: A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
USER:[img-0]List all the elements that you see. Do not repeat yourself.
ASSISTANT:
8:30PM DBG Prompt (after templating): A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.
USER:[img-0]List all the elements that you see. Do not repeat yourself.
ASSISTANT:
8:30PM INF Loading model 'minicpm-v2-f16.gguf' with backend llama-cpp
8:30PM DBG Stopping all backends except 'minicpm-v2-f16.gguf'
8:30PM DBG Loading model in memory from file: /models/minicpm-v2-f16.gguf
8:30PM DBG Loading Model minicpm-v2-f16.gguf with gRPC (file: /models/minicpm-v2-f16.gguf) (backend: llama-cpp): {backendString:llama-cpp model:minicpm-v2-f16.gguf threads:11 assetDir:/tmp/localai/backend_data context:{emptyCtx:{}} gRPCOptions:0xc00019b800 externalBackends:map[autogptq:/build/backend/python/autogptq/run.sh bark:/build/backend/python/bark/run.sh coqui:/build/backend/python/coqui/run.sh diffusers:/build/backend/python/diffusers/run.sh exllama:/build/backend/python/exllama/run.sh exllama2:/build/backend/python/exllama2/run.sh huggingface-embeddings:/build/backend/python/sentencetransformers/run.sh mamba:/build/backend/python/mamba/run.sh parler-tts:/build/backend/python/parler-tts/run.sh petals:/build/backend/python/petals/run.sh rerankers:/build/backend/python/rerankers/run.sh sentencetransformers:/build/backend/python/sentencetransformers/run.sh transformers:/build/backend/python/transformers/run.sh transformers-musicgen:/build/backend/python/transformers-musicgen/run.sh vall-e-x:/build/backend/python/vall-e-x/run.sh vllm:/build/backend/python/vllm/run.sh] grpcAttempts:20 grpcAttemptsDelay:2 singleActiveBackend:true parallelRequests:false}
8:30PM DBG Loading GRPC Process: /tmp/localai/backend_data/backend-assets/grpc/llama-cpp
8:30PM DBG GRPC Service for minicpm-v2-f16.gguf will be running at: '127.0.0.1:33079'
8:30PM DBG GRPC Service state dir: /tmp/go-processmanager4275177599
8:30PM DBG GRPC Service Started
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stdout Server listening on 127.0.0.1:33079
8:30PM DBG GRPC Service Ready
8:30PM DBG GRPC: Loading model with options: {state:{NoUnkeyedLiterals:{} DoNotCompare:[] DoNotCopy:[] atomicMessageInfo:<nil>} sizeCache:0 unknownFields:[] Model:minicpm-v2-f16.gguf ContextSize:512 Seed:1552041902 NBatch:512 F16Memory:false MLock:false MMap:true VocabOnly:false LowVRAM:false Embeddings:false NUMA:false NGPULayers:30 MainGPU: TensorSplit: Threads:11 LibrarySearchPath: RopeFreqBase:0 RopeFreqScale:0 RMSNormEps:0 NGQA:0 ModelFile:/models/minicpm-v2-f16.gguf Device: UseTriton:false ModelBaseName: UseFastTokenizer:false PipelineType: SchedulerType: CUDA:false CFGScale:0 IMG2IMG:false CLIPModel: CLIPSubfolder: CLIPSkip:0 ControlNet: Tokenizer: LoraBase: LoraAdapter: LoraScale:0 NoMulMatQ:false DraftModel: AudioPath: Quantization: GPUMemoryUtilization:0 TrustRemoteCode:false EnforceEager:false SwapSpace:0 MaxModelLen:0 TensorParallelSize:0 MMProj:minicpm-mmproj.gguf RopeScaling:1 32000 YarnExtFactor:0 YarnAttnFactor:0 YarnBetaFast:0 YarnBetaSlow:0 Type:}
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stdout {"timestamp":1716323441,"level":"INFO","function":"load_model","line":449,"message":"Multi Modal Mode Enabled"}
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: description: image encoder for LLaVA
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: GGUF version: 3
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: alignment: 32
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: n_tensors: 440
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: n_kv: 18
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: ftype: f16
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: loaded meta data with 18 key-value pairs and 440 tensors from /models/minicpm-mmproj.gguf
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 0: general.architecture str = clip
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 1: clip.has_text_encoder bool = false
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 2: clip.has_vision_encoder bool = true
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 3: clip.has_llava_projector bool = true
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 4: general.file_type u32 = 1
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 5: general.description str = image encoder for LLaVA
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 6: clip.projector_type str = resampler
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 7: clip.vision.image_size u32 = 448
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 8: clip.vision.patch_size u32 = 14
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 9: clip.vision.embedding_length u32 = 1152
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 10: clip.vision.feed_forward_length u32 = 4304
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 11: clip.vision.projection_dim u32 = 0
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 12: clip.vision.attention.head_count u32 = 16
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 13: clip.vision.attention.layer_norm_epsilon f32 = 0.000001
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 14: clip.vision.block_count u32 = 26
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 15: clip.vision.image_mean arr[f32,3] = [0.500000, 0.500000, 0.500000]
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 16: clip.vision.image_std arr[f32,3] = [0.500000, 0.500000, 0.500000]
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - kv 17: clip.use_gelu bool = true
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - type f32: 277 tensors
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: - type f16: 163 tensors
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr ggml_cuda_init: found 1 CUDA devices:
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr Device 0: NVIDIA GeForce RTX 3080, compute capability 8.6, VMM: yes
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: CLIP using CUDA backend
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: text_encoder: 0
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: vision_encoder: 1
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: llava_projector: 1
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: model size: 828.18 MB
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: metadata size: 0.17 MB
8:30PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: params backend buffer size = 828.18 MB (440 tensors)
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr key clip.vision.image_grid_pinpoints not found in file
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr key clip.vision.mm_patch_merge_type not found in file
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr key clip.vision.image_crop_resolution not found in file
8:31PM DBG GRPC(minicpm-v2-f16.gguf-127.0.0.1:33079): stderr clip_model_load: failed to load vision model tensors
8:31PM ERR Server error error="could not load model: rpc error: code = Unknown desc = Unexpected error in RPC handling"
I'm not sure what's causing the load to fail. From what I can tell, the three "key clip.vision.* not found" lines refer to optional LLaVA-1.6-style metadata keys, so the actual error seems to be "clip_model_load: failed to load vision model tensors". The metadata dump also shows clip.projector_type = resampler, which makes me wonder whether the llama.cpp version bundled with LocalAI supports MiniCPM-V's resampler projector yet.
Thank you!