
[Bug]: Supported precisions set is not available for Convert operation. #27764

@starlitsky2010

Description


OpenVINO Version

Master

Operating System

Android System

Device used for inference

CPU

Framework

ONNX

Model used

mobelinet-v3-tf

Issue description

LD_LIBRARY_PATH=/data/local/tmp ./data/local/tmp/benchmark_app -d CPU -m /data/local/tmp/mobelinet-v3-tf/v3-small_224_1.0_float.xml -hint throughput

Step-by-step reproduction

Following https://github.com/openvinotoolkit/openvino/blob/master/docs/dev/build_android.md, I built OpenVINO for Android with ABI x86_64 and the ONNX frontend, and enabled benchmark_app compilation, using the latest OpenVINO master baseline (commit 287ab98).

But when I run the mobelinet-v3-tf example mentioned in the official document above, the error below occurs.
Could the OpenVINO team give some tips about it?
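To isolate the failure from benchmark_app, a minimal reproducer along these lines (a sketch using the standard OpenVINO 2.0 C++ API; the model path is the one from the command above) should hit the same exception inside `compile_model`:

```cpp
// Minimal sketch: reproduce the compile-time failure without benchmark_app.
// Assumes OpenVINO 2.0 C++ runtime headers and libraries are available.
#include <iostream>
#include <openvino/openvino.hpp>

int main() {
    try {
        ov::Core core;
        // Same IR file passed to benchmark_app via -m.
        auto model = core.read_model(
            "/data/local/tmp/mobelinet-v3-tf/v3-small_224_1.0_float.xml");
        // The reported exception is thrown here, during CPU plugin compilation.
        auto compiled = core.compile_model(model, "CPU");
        std::cout << "compile_model succeeded\n";
    } catch (const std::exception& e) {
        std::cerr << "compile_model failed: " << e.what() << "\n";
        return 1;
    }
    return 0;
}
```

If this small program fails with the same "Supported precisions set is not available for Convert operation" message, the problem is in the CPU plugin's model compilation rather than in benchmark_app itself.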

[Screenshot 2024-11-27 102841]

Relevant log output

[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading OpenVINO Runtime
[ INFO ] OpenVINO:
[ INFO ] Build ................................. 2025.0.0-17426-287ab9883ac
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] Build ................................. 2025.0.0-17426-287ab9883ac
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[Step 4/11] Reading model files
[ INFO ] Loading model files
[ INFO ] Read model took 33.87 ms
[ INFO ] Original model I/O parameters:
[ INFO ] Network inputs:
[ INFO ]     input:0 (node: input) : f32 / [...] / [1,224,224,3]
[ INFO ] Network outputs:
[ INFO ]     MobilenetV3/Predictions/Softmax:0 (node: MobilenetV3/Predictions/Softmax) : f32 / [...] / [1,1001]
[Step 5/11] Resizing model to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Model batch size: 1
[ INFO ] Network inputs:
[ INFO ]     input:0 (node: input) : u8 / [N,H,W,C] / [1,224,224,3]
[ INFO ] Network outputs:
[ INFO ]     MobilenetV3/Predictions/Softmax:0 (node: MobilenetV3/Predictions/Softmax) : f32 / [...] / [1,1001]
[Step 7/11] Loading the model to the device
[ ERROR ] Exception from src/inference/src/cpp/core.cpp:107:
Exception from src/inference/src/dev/plugin.cpp:53:
Check 'jitter != jitters.end()' failed at src/common/snippets/src/lowered/target_machine.cpp:19:
Supported precisions set is not available for Convert operation.


Issue submission checklist

- [X] I'm reporting an issue. It's not a question.
- [X] I checked the problem with the documentation, FAQ, open issues, Stack Overflow, etc., and have not found a solution.
- [X] There is reproducer code and related data files such as images, videos, models, etc.
