# [Web] onnxruntime-web wasm: InferenceSession.create fails with error code 19451592 for onnx model #26615

### Describe the issue

When loading several ONNX models exported from PyTorch with opset 17
in the browser using onnxruntime-web with the WASM backend only
(no WebGPU), InferenceSession.create fails with an internal numeric
error code (19451592) and no human-readable error message.

This happens for multiple models:

  • UNet video model
  • Text encoder model
  • VAE decoder model

All of these models:

  • Pass onnx.checker.check_model in Python.
  • Load and run successfully with the CPU ExecutionProvider in Python
    onnxruntime on the same machine.

But in the browser (WASM), InferenceSession.create fails for each of them
with the same numeric code.

I would like to know what this error code means, and whether there is a way
to get more detailed diagnostics or to identify which operator or feature is
unsupported by the WASM backend.
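
In case it helps, here is how I try to coax more detail out of the runtime.
ort.env.debug, ort.env.logLevel, and the session-level logSeverityLevel /
logVerbosityLevel options are documented onnxruntime-web settings; whether
they surface anything extra for this particular failure is an assumption on
my part:

```js
import * as ort from 'onnxruntime-web';

// Try to surface more diagnostics from the WASM backend.
// These are documented onnxruntime-web settings; whether they reveal
// anything extra for this specific failure is an assumption.
ort.env.debug = true;          // enable extra checks / debug output
ort.env.logLevel = 'verbose';  // most detailed environment-level log level

const session = await ort.InferenceSession.create(
  './onnx_models_merged/unet_video.onnx',
  {
    executionProviders: ['wasm'],
    logSeverityLevel: 0,   // session-level: 0 = verbose
    logVerbosityLevel: 0,
  },
);
```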


### System information

  • OS: Windows 10/11
  • Browser: (e.g. Chrome xx / Edge xx)
  • onnxruntime-web version: 1.23.2
  • ExecutionProvider (web): wasm
  • Models (all exported from PyTorch, torch.export / torch.onnx.export, opset=17):
    • UNet video
      • sample.shape: torch.Size([2, 4, 4, 32, 32]), dtype: float32
      • timestep.shape: torch.Size([]), dtype: int64
      • encoder_hidden_states.shape: torch.Size([8, 77, 768]), dtype: float32
      • extra kwargs: ['cross_attention_kwargs', 'added_cond_kwargs']
      • output sample.shape: torch.Size([2, 4, 4, 32, 32]), dtype: float32
    • Text encoder
      • (similar CLIP text encoder; loads and runs fine on Python CPU EP)
    • VAE decoder
      • (loads and runs fine on Python CPU EP)

For all of these models:

  • onnx.checker.check_model passes.
  • InferenceSession(model_path, providers=['CPUExecutionProvider']) works
    and inference runs correctly in Python.
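
For reference, this is how I plan to construct the feeds once the session
loads, matching the shapes listed above. The input names ('sample',
'timestep', 'encoder_hidden_states') are my assumption based on the PyTorch
argument names; session.inputNames would give the authoritative list:

```js
import * as ort from 'onnxruntime-web';

// Feeds matching the exported UNet signature above. Input names are
// assumptions taken from the PyTorch argument names.
const sample = new ort.Tensor(
  'float32',
  new Float32Array(2 * 4 * 4 * 32 * 32), // zeros, shape [2, 4, 4, 32, 32]
  [2, 4, 4, 32, 32],
);
const timestep = new ort.Tensor('int64', BigInt64Array.from([0n]), []); // scalar int64
const encoderHiddenStates = new ort.Tensor(
  'float32',
  new Float32Array(8 * 77 * 768), // shape [8, 77, 768]
  [8, 77, 768],
);

const feeds = {
  sample,
  timestep,
  encoder_hidden_states: encoderHiddenStates,
};
// const results = await session.run(feeds); // never reached; create() fails first
```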

### Reproduction steps (web, simplified example for one model)

Here is a minimal JS snippet I use for all of these models (changing only
modelUrl):

```js
import * as ort from 'onnxruntime-web';

async function main() {
  const status = document.getElementById('status');

  try {
    ort.env.logLevel = 'verbose';
    ort.env.wasm.wasmPaths =
      'https://cdn.jsdelivr.net/npm/onnxruntime-web@1.23.2/dist/';
    ort.env.wasm.numThreads = 1;
    ort.env.wasm.simd = true;

    console.log('ORT env:', ort.env);

    const modelUrl = './onnx_models_merged/unet_video.onnx'; // or text_encoder.onnx / vae_decoder.onnx
    console.log('Creating session from', modelUrl);

    const session = await ort.InferenceSession.create(modelUrl, {
      executionProviders: ['wasm'],
      graphOptimizationLevel: 'all',
    });

    console.log('Session created OK');
    if (status) status.innerText = 'Session created OK';
  } catch (e) {
    console.error('ORT error in create():', e);
    if (status) {
      status.innerText =
        'Session create FAILED:\n' +
        (e && e.message ? e.message : String(e));
    }
  }
}

main();
```
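
I also tried a variation that fetches the model bytes explicitly and disables
graph optimization, to separate fetch/path problems from actual model-loading
failures (InferenceSession.create accepts a Uint8Array of model bytes;
whether the optimizer is involved in this failure is only a guess):

```js
// Variation: load the model bytes manually and disable graph optimization,
// to rule out fetch/path issues and optimizer passes.
const resp = await fetch('./onnx_models_merged/unet_video.onnx');
if (!resp.ok) throw new Error(`fetch failed: ${resp.status}`);
const modelBytes = new Uint8Array(await resp.arrayBuffer());
console.log('model size (bytes):', modelBytes.length);

const session = await ort.InferenceSession.create(modelBytes, {
  executionProviders: ['wasm'],
  graphOptimizationLevel: 'disabled', // rule out optimizer passes
});
```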

### To reproduce

See the reproduction steps and minimal snippet above.

### Urgency

_No response_

### ONNX Runtime Installation

Built from Source

### ONNX Runtime Version or Commit ID

1.23.2

### Execution Provider

'wasm'/'cpu' (WebAssembly CPU)

Labels: .NET, ep:WebGPU, platform:web