This repository is dedicated to training and experimenting with the Glow model described in Glow: Generative Flow with Invertible 1x1 Convolutions.
The model is adapted from rosinality's implementation, lives in GLOW.py, and is trained on the CelebA dataset. Below are some randomly generated samples at different sampling temperatures:
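A minimal sampling sketch, assuming GLOW.py exposes a rosinality-style Glow class with a reverse(z_list) method (the calc_z_shapes helper is reimplemented here so the snippet is self-contained):

```python
import torch
from GLOW import Glow  # assumed to export a rosinality-style Glow class

def calc_z_shapes(n_channel, image_size, n_block):
    # Latent shapes for the multi-scale architecture: each block halves the
    # spatial size and splits off channels; the last block keeps them all.
    shapes = []
    for _ in range(n_block - 1):
        image_size //= 2
        n_channel *= 2
        shapes.append((n_channel, image_size, image_size))
    image_size //= 2
    shapes.append((n_channel * 4, image_size, image_size))
    return shapes

@torch.no_grad()
def sample(model, n_samples=16, temperature=0.7, image_size=64, n_block=4, device="cuda"):
    # Draw latents from N(0, temperature^2 I) and run the flow in reverse to get images.
    z_list = [
        torch.randn(n_samples, *shape, device=device) * temperature
        for shape in calc_z_shapes(3, image_size, n_block)
    ]
    return model.reverse(z_list)
```

Lower temperatures trade sample diversity for visual fidelity, which is why the grids above look progressively sharper as the temperature decreases.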
The main experiment follows Why Normalizing Flows Fail to Detect Out-of-Distribution Data and can be found in main_exp.ipynb. The SVHN dataset is used as the out-of-distribution (OOD) data.
Here are the distributions of log-likelihoods:
Our results largely confirm the authors' finding:
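A sketch of how these histograms can be produced, assuming the model's forward pass returns (log_p, logdet, z_outs) as in rosinality's implementation; the per-image log-likelihood below is only up to an additive dequantization constant, which shifts every image equally and does not affect the comparison:

```python
import torch
import matplotlib.pyplot as plt

@torch.no_grad()
def log_likelihoods(model, loader, device="cuda"):
    # Per-image log-likelihood in nats (up to the constant -n_pixel * log(n_bins)
    # that comes from dequantization).
    lls = []
    for x, *_ in loader:
        log_p, logdet, _ = model(x.to(device))
        lls.append((log_p + logdet).cpu())
    return torch.cat(lls)

# celeba_test_loader and svhn_loader are hypothetical DataLoaders.
# ll_celeba = log_likelihoods(model, celeba_test_loader)
# ll_svhn = log_likelihoods(model, svhn_loader)
# plt.hist(ll_celeba.numpy(), bins=100, alpha=0.5, label="CelebA (test)")
# plt.hist(ll_svhn.numpy(), bins=100, alpha=0.5, label="SVHN (OOD)")
# plt.xlabel("log-likelihood"); plt.legend(); plt.show()
```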
We argue that flows are biased towards learning graphical properties of the data such as local pixel correlations (e.g. nearby pixels usually have similar colors) rather than semantic properties of the data (e.g. what objects are shown in the image).
Below, the images with the highest log-likelihoods in the OOD data are compared with the images with the lowest log-likelihoods in the train/test sets:
And here is the reverse comparison (lowest log-likelihoods for OOD data, highest for train/test sets):
It is easy to see that blurry out-of-distribution images, where nearby pixels indeed have similar colors, receive the highest log-likelihoods even though they contain no faces at all. On the other hand, faces against high-contrast, multicolored backgrounds receive the lowest scores from the model.
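The comparison grids above can be built by sorting images by their log-likelihoods; here is a sketch (the image tensors and log-likelihood variables are hypothetical names, continuing the snippet above):

```python
import torchvision

def extreme_images(images, lls, k=16, highest=True):
    # Select the k images with the highest (or lowest) log-likelihood and
    # arrange them into a grid for side-by-side inspection.
    order = lls.argsort(descending=highest)[:k]
    return torchvision.utils.make_grid(images[order], nrow=4)

# grid_ood_high = extreme_images(svhn_images, ll_svhn, highest=True)
# grid_train_low = extreme_images(celeba_images, ll_celeba, highest=False)
```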
Thanks to mseitzer for the Inception.py implementation used for FID calculation.
Thanks to WandB for the convenient logging and report-writing tool.