Following the discussion in part 1, I want to talk about:
- Requirements on the mapping
- The map does not need to be bijective
- Contractions -> useful for high-dimensional cases, e.g. images?
- Experiment (Mischa)
- Decoder part is injective, i.e. bijective onto its image
- Train encoder as VAE
- Train decoder as flow (fine-tune)
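The "injective decoder" point can be illustrated with a toy sketch (my own linear example, not from the experiment itself): a tall matrix with full column rank maps a low-dimensional latent injectively into a higher-dimensional space, so it is bijective onto its image and the latent can be recovered exactly there.

```python
import numpy as np

# Toy "decoder": x = W z with W tall and full column rank.
# Such a map is injective, hence invertible on its image.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))  # 3-dim latent -> 8-dim output

z = rng.standard_normal(3)
x = W @ z  # decode

# On the image of W, the Moore-Penrose pseudo-inverse acts as a left
# inverse, so the latent is recovered exactly.
z_rec = np.linalg.pinv(W) @ x
assert np.allclose(z, z_rec)
```

A trained decoder network is of course nonlinear, but the same logic applies: injectivity is all that is needed to invert it on the set of outputs it can actually produce.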
- Convolutional flow networks in more detail
- How does Glow work? Is it bijective? Does it keep the same input/output dimensions? (no)
- Piecewise invertible transformations for flow
- Dimensionality reduction flows
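For the piecewise-invertible point, a minimal sketch (my own toy example, not a flow layer from any of the papers above): f(x) = x^2 is not invertible on all of R, but it is invertible on each of the pieces x < 0 and x >= 0. Recording which piece each coordinate came from, as a discrete index, restores invertibility; this is the basic mechanism behind piecewise invertible transformations for flows.

```python
import numpy as np

def forward(x):
    # Record the branch (1 for x >= 0, 0 for x < 0) alongside x**2,
    # which is invertible on each branch separately.
    piece = (x >= 0).astype(np.int8)
    return x ** 2, piece

def inverse(y, piece):
    # Undo x**2 on the correct branch using the stored piece index.
    root = np.sqrt(y)
    return np.where(piece == 1, root, -root)

x = np.array([-2.0, -0.5, 0.0, 1.5])
y, piece = forward(x)
x_rec = inverse(y, piece)
assert np.allclose(x, x_rec)
```

The discrete index costs one bit per coordinate here; in a flow one would additionally track the per-piece Jacobian when computing the likelihood.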