Description
I think I found something interesting regarding the implementation/testing of CPU/GPU support: PyTorch recently introduced a "meta" device that can be used exactly for this kind of testing (https://docs.pytorch.org/docs/stable/meta.html). If I understood correctly, it lets you move tensors to a "meta" device, a hypothetical device that stores no data, and run "meta" computations that track shapes and dtypes without doing any actual work. This can be used to verify whether device switching happened properly: if the model attempts a computation that mixes a "meta" tensor with a CPU/GPU tensor, it errors out. So if a model can be successfully transferred from CPU to the meta device and run there, that should guarantee it will also work on a GPU (if one is available, of course). And this test can be run without a GPU! Anyway, curious to hear your thoughts!
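To make this concrete, here is a rough sketch of what such a test could look like. The toy model here is just a placeholder for the real one; I haven't thought through all edge cases, but this is roughly what the meta docs describe:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the real one.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))

# Move the model to the "meta" device: parameters keep their
# shapes and dtypes but hold no actual data.
meta_model = model.to("meta")

# A meta input tensor: shape-only, no storage.
x = torch.randn(2, 8, device="meta")

# The forward pass propagates shapes/devices without real computation.
out = meta_model(x)
print(out.shape, out.device)

# Mixing a CPU tensor with meta parameters should fail, which is
# exactly what makes this usable as a device-placement check.
try:
    meta_model(torch.randn(2, 8))  # CPU input against meta weights
    print("unexpected: mixed-device call succeeded")
except Exception:
    print("mixed-device call failed as expected")
```

If the forward pass on meta tensors completes and only the deliberately mixed call fails, the model's device handling is at least consistent, all without touching a GPU.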