Results: reconstruction vs. compactness -- KL vs. dim plot #68

Open
@donovanr

Description

Issue summary

As beta changes, plot KL vs. sorted(dims) and show what happens (maybe the curves become more peaked?).

Details

Another feature of the reconstruction vs. compactness trade-off explored by tuning beta should, I think, show up in the amount of variance captured by each of the (ordered) latent-space dimensions. At some point I saw a plot of variance vs. dimension that showed a strong fall-off over the first few dimensions, flattening down to a noise baseline at around dimension 40. I think that as beta increases, the fall-off in these curves should get steeper and steeper, since the model is pushing the information into a sparser latent space. It would be good to include these curves in the figure, or at the very least in the supplement, and to comment on the utility of tuning beta for dimensionality reduction.
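A minimal sketch of how the proposed curve could be computed, assuming the encoder outputs diagonal-Gaussian posterior parameters (the `mu` / `logvar` names here are hypothetical; the closed-form KL against a standard-normal prior is the usual analytic expression):

```python
import numpy as np

def per_dim_kl(mu, logvar):
    """Average per-dimension KL( q(z|x) || N(0, I) ) over a batch.

    mu, logvar: arrays of shape (n_samples, n_latent_dims), the
    posterior means and log-variances from the encoder.
    """
    # Analytic KL for a diagonal Gaussian vs. N(0, I), one term per dim
    kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)
    return kl.mean(axis=0)  # shape: (n_latent_dims,)

def sorted_kl_curve(mu, logvar):
    """KL vs. dimension, sorted from most- to least-used dimension."""
    return np.sort(per_dim_kl(mu, logvar))[::-1]
```

Plotting `sorted_kl_curve` for models trained at several beta values would then show directly whether the fall-off steepens as beta grows, i.e. whether high-beta models concentrate their KL budget into fewer dimensions.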

TODO

  • code
  • manuscript text
  • figure
