Speeding up training times for SNPE. #1546
Unanswered
paarth-dudani
asked this question in
Q&A
Replies: 1 comment 4 replies
Any thoughts or ideas anyone?
Hello community!
So I have been running SNPE on very high-dimensional models and have been getting successful results like this:
Red shows MCMC ground truth and blue is the SNPE estimate using MADE.
While this all works, I now want to address the problem of speeding up the inference. In particular, my aim is to infer the joint posterior with SNPE faster than with MCMC (currently MCMC is faster).
Can someone suggest ways in which the neural network training and convergence can be sped up as part of the sbi pipeline?
I use the following function to wrap up the sbi training pipeline:
For reference, the estimate shown above took ~6.5 hours to train (2 rounds, 35k simulations) on a high-performance cluster with 100 CPUs and 100 GB of memory. This runtime currently limits the accessibility of these algorithms.
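One lever worth checking is the early-stopping patience of the training loop: `sbi`'s `train()` stops once the validation loss has not improved for a number of epochs (its `stop_after_epochs` argument), so lowering that patience can cut wall-clock time at some risk of under-fitting. Below is a minimal stdlib-only sketch of how such patience-based early stopping works; it is illustrative, not the `sbi` source, and all helper names (`train_with_early_stopping`, `run_epoch`) are hypothetical.

```python
def train_with_early_stopping(run_epoch, patience=20, max_epochs=500):
    """Run epochs until the validation loss stalls for `patience` epochs.

    `run_epoch()` performs one training epoch and returns the current
    validation loss. Returns (epochs_run, best_validation_loss).
    """
    best_loss = float("inf")
    epochs_since_improvement = 0
    for epoch in range(max_epochs):
        val_loss = run_epoch()
        if val_loss < best_loss:
            best_loss = val_loss
            epochs_since_improvement = 0
        else:
            epochs_since_improvement += 1
        # Stop as soon as the plateau has lasted `patience` epochs;
        # a smaller patience trades accuracy for shorter training.
        if epochs_since_improvement >= patience:
            break
    return epoch + 1, best_loss


# Toy loss curve: improves for 10 epochs, then plateaus at 0.0.
losses = iter([1.0 - 0.1 * min(i, 10) for i in range(500)])
epochs_run, best = train_with_early_stopping(lambda: next(losses), patience=5)
```

With `patience=5` the toy run stops 5 epochs after the plateau begins instead of grinding through all 500 epochs, which is the same trade-off a smaller `stop_after_epochs` makes in a real `train()` call.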