
AssertionError: BF16 weight prepack needs the cpu support avx512bw, avx512vl and avx512dq, please set dtype to torch.float or set weights_prepack to False. #4

@mayowaosibodu

Description


Hello,

I recently came across this repo and think it's cool. I'm interested in running SadTalker inference much faster, and I'm curious about the speedups this repo provides.

I'm trying the inference on an Intel Xeon E5-2690 v4 (Broadwell) CPU, but I'm getting the above error.

Here's the general output after running the inference command:

```
using safetensor as default
start to generate video... 1694444593.596219
device========= cpu
---------device----------- cpu
0000: Audio2Coeff 0.9642581939697266
Traceback (most recent call last):
  File "inference.py", line 217, in <module>
    main(args)
  File "inference.py", line 51, in main
    animate_from_coeff = AnimateFromCoeff(sadtalker_paths, device)
  File "/home/demo/xtalker/src/facerender/animate.py", line 78, in __init__
    self.generator = ipex.optimize(self.generator, dtype=torch.bfloat16)
  File "/home/demo/.local/lib/python3.8/site-packages/intel_extension_for_pytorch/frontend.py", line 526, in optimize
    assert core.onednn_has_bf16_support(), \
AssertionError: BF16 weight prepack needs the cpu support avx512bw, avx512vl and avx512dq, please set dtype to torch.float or set weights_prepack to False.
```

Is the BF16 weight prepack important for the xtalker speedups?

If so, does this mean xtalker can't run effectively on the Xeon E5-2690 v4 (Broadwell) CPU? Which CPUs does it support, besides the Xeon Sapphire Rapids CPUs?
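For what it's worth, the assertion message itself suggests two workarounds: pass `dtype=torch.float` or `weights_prepack=False` to `ipex.optimize`. Below is a minimal sketch of how one might guard the call in `animate.py` by checking the CPU's AVX-512 flags first. The helper `has_bf16_prepack_isa` is my own hypothetical name, not part of xtalker or IPEX, and the flag check assumes a Linux `/proc/cpuinfo`:

```python
# Hypothetical helper -- not part of xtalker or intel_extension_for_pytorch.
def has_bf16_prepack_isa(cpuinfo_path="/proc/cpuinfo"):
    """Return True if the CPU advertises avx512bw, avx512vl and avx512dq,
    the three ISA extensions the BF16 weight prepack assertion checks for."""
    required = {"avx512bw", "avx512vl", "avx512dq"}
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... avx512bw ..." -> set of flag names
                    flags = set(line.split(":", 1)[1].split())
                    return required <= flags
        return False
    except OSError:  # e.g. not Linux, or path missing
        return False

# In src/facerender/animate.py one could then guard the optimize call,
# falling back to FP32 on Broadwell-era CPUs (sketch, untested):
#   dtype = torch.bfloat16 if has_bf16_prepack_isa() else torch.float32
#   self.generator = ipex.optimize(self.generator, dtype=dtype)
```

Whether FP32 still gives you a worthwhile speedup over stock SadTalker on Broadwell is exactly the question above, so I'd treat this only as a way to get inference running at all.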

I'm running xtalker on Azure VMs.
