[Torch] Support FP16 Conversion #2429

@YifanShenSZ

Description

As of now, our torch converter

  1. Assumes the given torch model is in fp32 compute precision (i.e. weights and activations are all in fp32)
  2. Converts the torch model as is (i.e. applies no treatments such as type promotion)
  3. Sandwiches each op as cast(fp16) -> op -> cast(fp32), then eliminates the cancelling casts to obtain fp16 compute precision (see the sketch after this list)
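To make step 3 concrete, here is a toy sketch of the idea, not the actual graph pass; the op names are hypothetical placeholders. Each op gets wrapped in a cast sandwich, and a back-to-back cast(fp32) -> cast(fp16) pair between two adjacent ops cancels out:

```python
# Toy illustration of the "cast sandwich" plus cancelling-cast elimination
# described above. Op names are placeholders, not real MIL ops.
ops = ["conv", "relu", "linear"]  # a hypothetical fp32 op sequence

# Step 1: sandwich every op: cast(fp16) -> op -> cast(fp32).
sandwiched = []
for op in ops:
    sandwiched += ["cast_fp16", op, "cast_fp32"]

# Step 2: eliminate cancelling pairs
# (a cast_fp32 immediately followed by a cast_fp16).
optimized = []
for op in sandwiched:
    if optimized and optimized[-1] == "cast_fp32" and op == "cast_fp16":
        optimized.pop()  # the two casts cancel
        continue
    optimized.append(op)

print(optimized)  # ['cast_fp16', 'conv', 'relu', 'linear', 'cast_fp32']
```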

This works in most cases: users can usually call torch_model.to(torch.float32) and then invoke coremltools.convert (a sketch of this workaround follows the list below). However, there are cases where developers request conversion support for fp16 or mixed fp16-fp32 torch models:

  1. Remove state cast to fp32 (#2423): the fp32 torch model would be too big to fit in memory
  2. Skip casting model inputs to fp32 if weights and inputs are all fp16 (#2274)
  3. Avoid fp32 cast for Torch div operator (#2241)
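For reference, the common-case workaround mentioned above looks like the following minimal sketch. The Linear model and shapes are hypothetical stand-ins for a user's real fp16 model; torch.jit.trace and the convert_to / compute_precision arguments are standard coremltools API, with fp16 being the default compute precision for ML Programs anyway.

```python
import numpy as np
import torch
import coremltools as ct

# Hypothetical fp16 model standing in for a user's real model.
model = torch.nn.Linear(4, 4).half().eval()

# Workaround: cast the whole model back to fp32 first...
model = model.to(torch.float32)

example_input = torch.rand(1, 4)  # fp32 example input for tracing
traced = torch.jit.trace(model, example_input)

# ...then convert; the converter's cast-sandwich pass lowers compute to fp16.
mlmodel = ct.convert(
    traced,
    inputs=[ct.TensorType(shape=example_input.shape, dtype=np.float32)],
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,
)
```

The three requests above are exactly the cases where this fp32 round trip is too costly or undesirable.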
