Pareto frontier literally off the charts
Update:
Wow this blew up. Pressure is on.
Please bear with me as I want to do careful ablations.
- upload the run-script.ipynb file to Google Colab or Modal
- (optional) if you want to save results, mount your drive/volume. If you don't, comment out the 2 cells that save a checkpoint to Drive (sketched below)
- choose A100
- Hit run all
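For reference, the two optional Drive cells look roughly like this. This is a minimal sketch, not the notebook's exact code: the path in `CHECKPOINT_DIR` and the stand-in `model` are hypothetical placeholders, and `drive.mount` only works inside Colab.

```python
# Optional cell 1: mount Google Drive so checkpoints survive the session.
from google.colab import drive
drive.mount('/content/drive')

# Optional cell 2: save a checkpoint to Drive (comment out both cells to skip).
import os
import torch

CHECKPOINT_DIR = '/content/drive/MyDrive/arc-runs'  # hypothetical path
model = torch.nn.Linear(4, 4)                       # stand-in for the real model

os.makedirs(CHECKPOINT_DIR, exist_ok=True)
torch.save(model.state_dict(), os.path.join(CHECKPOINT_DIR, 'checkpoint.pt'))
```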
Every DL approach on ARC today (other than compressARC) trains a supervised algorithm.
I think this is suboptimal.
A self-supervised compression step will obviously perform better:
- There is new information in the input grids and private puzzles that is currently uncompressed
- Test grids have distribution shift; compression pushes these grids back into distribution (a sketch of the idea follows below)
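Concretely, by a "compression step" I mean fitting the model to the test puzzle's own grids at test time with a reconstruction-style objective instead of labels. Here is a minimal sketch of that idea; the toy autoencoder, the MSE loss, and the hyperparameters are illustrative placeholders, not my actual implementation:

```python
import torch
import torch.nn.functional as F

def test_time_compress(model, grids, steps=100, lr=1e-3):
    """Adapt `model` to one puzzle's grids with a self-supervised
    reconstruction loss (a stand-in for a true compression objective)."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        for grid in grids:  # everything visible at test time: demo pairs + test inputs
            opt.zero_grad()
            loss = F.mse_loss(model(grid), grid)  # reconstruct the grid
            loss.backward()
            opt.step()
    return model

# Toy usage: one 5x5 ARC grid, one-hot over 10 colors -> shape (1, 10, 5, 5).
model = torch.nn.Sequential(
    torch.nn.Conv2d(10, 32, 3, padding=1), torch.nn.ReLU(),
    torch.nn.Conv2d(32, 10, 3, padding=1))
grid = F.one_hot(torch.randint(0, 10, (1, 5, 5)), 10).permute(0, 3, 1, 2).float()
test_time_compress(model, [grid], steps=10)
```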
Implementation details: a new Pareto frontier on ARC-AGI. For why I chose these specific implementations, read my blog post "Why all ARC solvers fail today".
Performance: 27.5% on ARC-1 public eval
Total compute cost: $1.80
- ~127 min on a 40GB A100 for training ($1.20)
- ~49 min on an 80GB A100 for inference ($0.60)
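Those two line items back out to the implied hourly rates below. This is my own arithmetic from the reported numbers, not quoted provider prices:

```python
train_hours = 127 / 60   # ~2.12 h on the 40GB A100
infer_hours = 49 / 60    # ~0.82 h on the 80GB A100

print(f"40GB A100: ${1.2 / train_hours:.2f}/h")  # ~$0.57/h
print(f"80GB A100: ${0.6 / infer_hours:.2f}/h")  # ~$0.73/h
print(f"Total:     ${1.2 + 0.6:.2f}")            # $1.80
```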
This is early performance. I was too GPU-poor to do hyperparameter sweeps.
I should be able to push to 35% with just basic sweeps.
I expect to hit 50% with a few obvious research ideas.