Exported model assumes that the input should always be similar to the tracing example #1991

Description

@hadiidbouk

🐞Describing the bug

The bug isn't detected while exporting the model and no error is shown; however, when I try to use the model in Swift, I get this error:

Thread 17: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "MultiArray shape (1 x 27200) does not match the shape (1 x 16000) specified in the model description" UserInfo={NSLocalizedDescription=MultiArray shape (1 x 27200) does not match the shape (1 x 16000) specified in the model description}

On this line:

let output = try! self.inferenceModule.prediction(input: input)

Something goes wrong during export that makes the traced model misbehave: it assumes the input always has the same shape as the example passed to the trace function.

The first thing to suspect is that the tracing itself is failing, but that's not the case: I can export the same model with PyTorch Lightning and use it with the LibTorch C++ library without any problem.
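
(For context: a Core ML model description records static input shapes unless flexible shapes are requested at conversion time, so the shape of the tracing example ends up baked into the .mlpackage. One way to confirm this is to inspect the converted spec; a minimal sketch, assuming mlmodel is the object returned by ct.convert in the script below:)

# Minimal sketch: print the input shapes recorded in the model description.
# Assumes `mlmodel` is the converted model from the reproduction script below.
spec = mlmodel.get_spec()
for inp in spec.description.input:
    # For multiArrayType inputs this prints the fixed shape, e.g. [1, 16000].
    print(inp.name, list(inp.type.multiArrayType.shape))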

Stack Trace

When both 'convert_to' and 'minimum_deployment_target' not specified, 'convert_to' is set to "mlprogram" and 'minimum_deployment_targer' is set to ct.target.iOS15 (which is same as ct.target.macOS12). Note: the model will not run on systems older than iOS15/macOS12/watchOS8/tvOS15. In order to make your model run on older system, please set the 'minimum_deployment_target' to iOS14/iOS13. Details please see the link: https://coremltools.readme.io/docs/unified-conversion-api#target-conversion-formats
Support for converting Torch Script Models is experimental. If possible you should use a traced model for conversion.
Tuple detected at graph output. This will be flattened in the converted model.
Converting PyTorch Frontend ==> MIL Ops:   0%|                                                                                                      | 0/486 [00:00<?, ? ops/s]Saving value type of int64 into a builtin type of int32, might lose precision!
Saving value type of int64 into a builtin type of int32, might lose precision!
Saving value type of int64 into a builtin type of int32, might lose precision!
Converting PyTorch Frontend ==> MIL Ops:  71%|███████████████████████████████████████████████████████████████▉                          | 345/486 [00:00<00:00, 3449.42 ops/s]Saving value type of int64 into a builtin type of int32, might lose precision!
Saving value type of int64 into a builtin type of int32, might lose precision!
Converting PyTorch Frontend ==> MIL Ops: 100%|█████████████████████████████████████████████████████████████████████████████████████████▋| 484/486 [00:00<00:00, 3123.51 ops/s]
Running MIL frontend_pytorch pipeline:   0%|                                                                                                       | 0/5 [00:00<?, ? passes/s]Saving value type of int64 into a builtin type of int32, might lose precision!
Saving value type of int64 into a builtin type of int32, might lose precision!
Running MIL frontend_pytorch pipeline: 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 142.70 passes/s]
Running MIL default pipeline:   0%|                                                                                                               | 0/66 [00:00<?, ? passes/s]Saving value type of float64 into a builtin type of fp32, might lose precision!
Saving value type of float64 into a builtin type of fp32, might lose precision!
Running MIL default pipeline:   6%|██████▏                                                                                                | 4/66 [00:00<00:01, 39.63 passes/s] /python3.9/site-packages/coremltools/converters/mil/mil/passes/defs/preprocess.py:267: UserWarning: Output, 'input57.1', of the source model, has been renamed to 'input57_1' in the Core ML model.
      warnings.warn(msg.format(var.name, new_name))

Running MIL default pipeline: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████| 66/66 [00:03<00:00, 21.46 passes/s]
Running MIL backend_mlprogram pipeline: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:00<00:00, 393.06 passes/s]

To Reproduce

import os

import torch
import coremltools as ct

custom_model = MyCustomModel()  # user-defined model, definition elided here
custom_model.eval()

# Example inputs used for tracing: one second of audio at 16 kHz.
audio_signal = torch.randn(1, 16000)
audio_signal_len = torch.tensor([audio_signal.shape[1]])

# Trace the module itself rather than its bound forward method.
traced_model = torch.jit.trace(
    custom_model, example_inputs=(audio_signal, audio_signal_len)
)

# output_dir is defined elsewhere in the project.
exported_model_path = os.path.join(output_dir, "Model.ts")
if os.path.exists(exported_model_path):
    os.remove(exported_model_path)

traced_model.save(exported_model_path)

torchscript_model = torch.jit.load(exported_model_path)

mlmodel = ct.convert(
    torchscript_model,
    source="pytorch",
    inputs=[
        ct.TensorType(name="input_signal", shape=audio_signal.shape),
        ct.TensorType(name="input_signal_length", shape=audio_signal_len.shape),
    ],
)
exported_model_path = os.path.join(output_dir, "Model.mlpackage")
mlmodel.save(exported_model_path)
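
With shape=audio_signal.shape above, the converted model only ever accepts (1, 16000) inputs, which matches the Swift error. A sketch of the same conversion with a flexible audio length using ct.RangeDim follows; the 160000 upper bound and the default are assumptions chosen for illustration, and whether the converted model actually handles variable lengths still depends on the traced ops:

# Sketch: declare the audio length as a flexible dimension instead of a fixed one.
# Bounds and default below are illustrative assumptions.
audio_len_dim = ct.RangeDim(lower_bound=1, upper_bound=160000, default=16000)
mlmodel_flexible = ct.convert(
    torchscript_model,
    source="pytorch",
    inputs=[
        ct.TensorType(name="input_signal", shape=ct.Shape(shape=(1, audio_len_dim))),
        ct.TensorType(name="input_signal_length", shape=audio_signal_len.shape),
    ],
)

ct.EnumeratedShapes is an alternative when only a small set of specific input lengths is needed.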

System environment (please complete the following information):

  • coremltools version: 7.0.0
  • OS (e.g. MacOS version or Linux type): macOS 14.0
