Compile bug: [QNN] Not able to run tiny llama model with QNN NPU #14

@akshatshah17

Git commit

e36ad89

Operating systems

Linux

GGML backends

CPU

Problem description & steps to reproduce

I followed the procedure below to build llama.cpp and convert the model into quantized GGUF format, but when I run the model on the device, it fails to load.

```
git clone https://github.com/chraac/llama.cpp.git --recursive
cd llama.cpp
git checkout dev-refactoring
export ANDROID_NDK=/home/code/Android/Ndk/android-ndk-r26d/
export QNN_SDK_PATH=/home/code/Android/qnn-sdk/qairt/2.27.5.241009/
```

Build for CPU

```
cmake -B build
cmake --build build --config Release -j16
```
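As a quick sanity step before cross-compiling (not part of the original failure), the host build can be smoke-tested:

```
# Print the build number and commit to confirm the host binaries link and run
./build/bin/llama-cli --version
```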

Build for Android

```
cmake \
  -DCMAKE_TOOLCHAIN_FILE=$ANDROID_NDK/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28 \
  -DCMAKE_C_FLAGS="-march=armv8.7a" \
  -DCMAKE_CXX_FLAGS="-march=armv8.7a" \
  -DGGML_OPENMP=OFF \
  -DGGML_LLAMAFILE=OFF \
  -DGGML_QNN=ON \
  -DGGML_QNN_DEFAULT_LIB_SEARCH_PATH=/data/local/tmp \
  -B build-android
cmake --build build-android --config Release -j4
cmake --install build-android --prefix install-android --config Release
```
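Note that the QNN backend also expects the QNN runtime libraries under the search path configured above (GGML_QNN_DEFAULT_LIB_SEARCH_PATH=/data/local/tmp). A sketch of that push step, assuming the standard QAIRT SDK layout; the HTP generation (v75 for the S24's Snapdragon 8 Gen 3) is an assumption, so adjust for the actual SoC:

```
# Push QNN runtime libs to the backend's default search path.
# Library names follow the standard QAIRT SDK layout; the V75 stub/skel
# names assume an HTP v75 SoC (Snapdragon 8 Gen 3) -- adjust for other devices.
adb push $QNN_SDK_PATH/lib/aarch64-android/libQnnSystem.so /data/local/tmp/
adb push $QNN_SDK_PATH/lib/aarch64-android/libQnnCpu.so /data/local/tmp/
adb push $QNN_SDK_PATH/lib/aarch64-android/libQnnGpu.so /data/local/tmp/
adb push $QNN_SDK_PATH/lib/aarch64-android/libQnnHtp.so /data/local/tmp/
adb push $QNN_SDK_PATH/lib/aarch64-android/libQnnHtpV75Stub.so /data/local/tmp/
adb push $QNN_SDK_PATH/lib/hexagon-v75/unsigned/libQnnHtpV75Skel.so /data/local/tmp/
```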

Model conversion

```
python3 convert_hf_to_gguf.py ~/tiny_llama/ --outfile output_file_tiny_llama_fp32.gguf --outtype f32
./build/bin/llama-quantize output_file_tiny_llama_fp32.gguf output_file_tiny_llama_Q4_K_M.gguf Q4_K_M
```
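To rule out a bad conversion, the quantized file can be sanity-checked with the host CPU build before pushing it to the device (an extra check, not part of the original steps):

```
# Load the quantized GGUF on the host CPU build and generate a few tokens;
# if this also fails, the problem is the model file rather than the QNN backend
./build/bin/llama-cli -m output_file_tiny_llama_Q4_K_M.gguf -p "Hello" -n 16
```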

On the S24 (Qualcomm)

```
adb push install-android/ /data/local/tmp/
adb push output_file_tiny_llama_Q4_K_M.gguf /data/local/tmp/
```

Then, inside adb shell on the device:

```
export LD_LIBRARY_PATH=/data/local/tmp/install-android/lib/
./install-android/bin/llama-cli -m output_file_tiny_llama_Q4_K_M.gguf -c 512 -p "prompt"
```
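Two things I also suspect but have not confirmed (assumptions on my part, not verified fixes): the Hexagon skel library may need to be discoverable through ADSP_LIBRARY_PATH for HTP device creation to succeed, and layer offload needs -ngl, since the log below shows 0/31 layers offloaded:

```
# Inside adb shell -- both settings below are assumptions, not confirmed fixes.
# ADSP_LIBRARY_PATH is how FastRPC locates libQnnHtpV75Skel.so on the DSP side.
export LD_LIBRARY_PATH=/data/local/tmp/install-android/lib/:/data/local/tmp/
export ADSP_LIBRARY_PATH=/data/local/tmp/
# -ngl 99 requests offload of all layers to the non-CPU backend
./install-android/bin/llama-cli -m output_file_tiny_llama_Q4_K_M.gguf -c 512 -ngl 99 -p "prompt"
```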

First Bad Commit

No response

Relevant log output

```
build: 4396 (e36ad895) with cc (Ubuntu 9.4.0-1ubuntu1~20.04.3) 9.4.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_load_model_from_file: using device qnn-gpu (Qualcomm Adreno GPU) - 7630 MiB free
llama_model_loader: loaded meta data with 29 key-value pairs and 273 tensors from output_file_SR_3B_Q4_K_M.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = SR_3B
llama_model_loader: - kv   3:                         general.size_label str              = 3.6B
llama_model_loader: - kv   4:                          llama.block_count u32              = 30
llama_model_loader: - kv   5:                       llama.context_length u32              = 1280
llama_model_loader: - kv   6:                     llama.embedding_length u32              = 3072
llama_model_loader: - kv   7:                  llama.feed_forward_length u32              = 8192
llama_model_loader: - kv   8:                 llama.attention.head_count u32              = 24
llama_model_loader: - kv   9:              llama.attention.head_count_kv u32              = 4
llama_model_loader: - kv  10:                       llama.rope.freq_base f32              = 500000.000000
llama_model_loader: - kv  11:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  12:                 llama.attention.key_length u32              = 128
llama_model_loader: - kv  13:               llama.attention.value_length u32              = 128
llama_model_loader: - kv  14:                          general.file_type u32              = 15
llama_model_loader: - kv  15:                           llama.vocab_size u32              = 105900
llama_model_loader: - kv  16:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv  17:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  18:                         tokenizer.ggml.pre str              = llama-bpe
llama_model_loader: - kv  19:                      tokenizer.ggml.tokens arr[str,105900]  = ["<|end_of_text|>", "<|begin_of_text|...
llama_model_loader: - kv  20:                  tokenizer.ggml.token_type arr[i32,105900]  = [3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, ...
llama_model_loader: - kv  21:                      tokenizer.ggml.merges arr[str,105604]  = ["Ġ Ġ", "ĠĠ ĠĠ", "Ġ t", "i n",...
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  23:                tokenizer.ggml.eos_token_id u32              = 0
llama_model_loader: - kv  24:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  25:            tokenizer.ggml.padding_token_id u32              = 0
llama_model_loader: - kv  26:                    tokenizer.chat_template str              = {% if messages[0]['role'] == 'system'...
llama_model_loader: - kv  27:            tokenizer.ggml.add_space_prefix bool             = false
llama_model_loader: - kv  28:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   61 tensors
llama_model_loader: - type q4_K:  183 tensors
llama_model_loader: - type q6_K:   29 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 53
llm_load_vocab: token to piece cache size = 0.6436 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 105900
llm_load_print_meta: n_merges         = 105604
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 1280
llm_load_print_meta: n_embd           = 3072
llm_load_print_meta: n_layer          = 30
llm_load_print_meta: n_head           = 24
llm_load_print_meta: n_head_kv        = 4
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 6
llm_load_print_meta: n_embd_k_gqa     = 512
llm_load_print_meta: n_embd_v_gqa     = 512
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 500000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 1280
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 3.58 B
llm_load_print_meta: model size       = 2.04 GiB (4.90 BPW)
llm_load_print_meta: general.name     = SR_3B
llm_load_print_meta: BOS token        = 1 '<|begin_of_text|>'
llm_load_print_meta: EOS token        = 0 '<|end_of_text|>'
llm_load_print_meta: UNK token        = 0 '<|end_of_text|>'
llm_load_print_meta: PAD token        = 0 '<|end_of_text|>'
llm_load_print_meta: LF token         = 179 'Ä'
llm_load_print_meta: FIM PRE token    = 2 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 4 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 3 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 5 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 7 '<|repo_name|>'
llm_load_print_meta: EOG token        = 0 '<|end_of_text|>'
llm_load_print_meta: EOG token        = 5 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 7 '<|repo_name|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/31 layers to GPU
llm_load_tensors:   CPU_Mapped model buffer size =  2091.15 MiB
..................................................................................
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 512
llama_new_context_with_model: n_ctx_per_seq = 512
llama_new_context_with_model: n_batch       = 512
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 500000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (512) < n_ctx_train (1280) -- the full capacity of the model will not be utilized
[ggml_backend_qnn_init_with_device_context, 327]: extend_lib_search_path is nullptr, will use /data/local/tmp// as default
[qnn_system_interface, 10]: initialize qnn system successfully

[qnn_init, 248]: device property is not supported
[qnn_init, 299]: create QNN device successfully
[ggml_backend_qnn_init_with_device_context, 379]: qnn device name qnn-gpu
[ggml_backend_qnn_init_with_device_context, 327]: extend_lib_search_path is nullptr, will use /data/local/tmp// as default
[qnn_system_interface, 10]: initialize qnn system successfully

[qnn_init, 258]: device counts 1
[qnn_init, 263]: deviceID:0, deviceType:0, numCores 1
[qnn_init, 268]: htp_type:0(ON_CHIP)
[qnn_init, 271]: qualcomm soc_model:69(unknown), htp_arch:79(unknown), vtcm_size:8 MB
[qnn_init, 297]: failed to create QNN device
[qnn_init, 346]: why failed to initialize qnn context
[ggml_backend_qnn_init_with_device_context, 369]: init qnn subsystem failed with qnn backend qnn-npu, pls check why
llama_new_context_with_model: failed to initialize qnn-npu backend
[ggml_backend_qnn_free, 208]: idx 1, name:qnn-gpu
common_init_from_params: failed to create context with model 'output_file_SR_3B_Q4_K_M.gguf'
main: error: unable to load model
```
