Issue summary
orthogonal latent space ==> reproducible training?
Details
If it is really true that the latent space dimensions tend toward being mutually orthogonal, as Rory mentioned on Friday, then retraining the model on the same data set permuted into a different order should yield a largely congruent latent space (the same structure up to a rotation, reflection, or permutation of axes). I am explicitly not asking for any more retraining for this manuscript, but this is something we should definitely explore going forward, ideally with the simplified 2D reference-only data set used in Fig. 3.
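A minimal sketch of one way to quantify "congruent" once two such runs exist, assuming we can dump matched latent codes for the same reference samples from each run to `.npy` files. The file names, the Procrustes-based residual, and the correlation check are all illustrative here, not part of the current pipeline:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Hypothetical inputs: (n_samples, n_dims) latent codes of the SAME
# reference samples from two models trained on differently permuted
# copies of the data set. Paths are placeholders.
latents_a = np.load("latents_run_a.npy")
latents_b = np.load("latents_run_b.npy")

# Center each latent space so the comparison ignores translation.
A = latents_a - latents_a.mean(axis=0)
B = latents_b - latents_b.mean(axis=0)

# If the latent dimensions really tend toward orthogonality, the two
# spaces should agree up to an orthogonal map. Orthogonal Procrustes
# finds the best such map; the residual measures (in)congruence.
R, _ = orthogonal_procrustes(A, B)
residual = np.linalg.norm(A @ R - B) / np.linalg.norm(B)
print(f"relative Procrustes residual: {residual:.3f}")  # ~0 => congruent

# Sanity check on the orthogonality claim itself: off-diagonal
# correlations between latent dimensions should be near zero.
corr = np.corrcoef(A, rowvar=False)
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print(f"max |off-diagonal latent correlation|: {np.abs(off_diag).max():.3f}")
```

Comparing via a fitted orthogonal map rather than raw coordinates matters because even a perfectly reproducible training run can land on an arbitrarily rotated or reflected copy of the same latent space.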
TODO
- manuscript text