ER-MAML is an improved version of the Model-Agnostic Meta-Learning (MAML) algorithm, designed to enhance generalization in meta-reinforcement learning tasks. This repository provides the core implementation of ER-MAML, hyperparameter configurations, and auxiliary code built on the MAML-PyTorch implementation (https://github.com/dragen1860/MAML-Pytorch).
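
For orientation, the sketch below shows the generic MAML-style inner/outer adaptation pattern that the hyperparameters listed in the tables configure (`inner_lr`, `adapt_steps`, `meta_batch_size`, `outer_lr`, `fc_neurons`). It is illustrative only: the function and variable names are hypothetical rather than taken from this codebase, a dummy regression loss stands in for the repository's PPO/TRPO-style objectives, and a plain SGD meta-update stands in for the trust-region step governed by `max_kl`, `backtrack_factor`, and `ls_max_steps`.

```python
# Simplified MAML-style inner/outer loop (illustrative sketch, not repo code).
# Requires PyTorch >= 2.0 for torch.func.functional_call.
import torch
import torch.nn as nn
from torch.func import functional_call

def make_policy(obs_dim=4, act_dim=2, fc_neurons=100):
    # Two hidden layers of `fc_neurons` units with ReLU activations.
    return nn.Sequential(
        nn.Linear(obs_dim, fc_neurons), nn.ReLU(),
        nn.Linear(fc_neurons, fc_neurons), nn.ReLU(),
        nn.Linear(fc_neurons, act_dim),
    )

def dummy_task_loss(policy, params):
    # Stand-in for the per-task objective computed from sampled trajectories.
    x, y = torch.randn(10, 4), torch.randn(10, 2)
    pred = functional_call(policy, params, (x,))
    return ((pred - y) ** 2).mean()

def meta_train_step(policy, inner_lr=0.01, outer_lr=0.1,
                    adapt_steps=3, meta_batch_size=20):
    meta_params = dict(policy.named_parameters())
    meta_loss = 0.0
    for _ in range(meta_batch_size):          # one sampled task per iteration
        fast = dict(meta_params)
        # Inner loop: a few gradient steps on the task-specific objective.
        for _ in range(adapt_steps):
            loss = dummy_task_loss(policy, fast)
            grads = torch.autograd.grad(loss, list(fast.values()), create_graph=True)
            fast = {k: v - inner_lr * g
                    for (k, v), g in zip(fast.items(), grads)}
        # Outer objective: post-adaptation loss on fresh data from the same task.
        meta_loss = meta_loss + dummy_task_loss(policy, fast)
    meta_loss = meta_loss / meta_batch_size
    # Plain SGD meta-update; the repository instead uses a trust-region update.
    grads = torch.autograd.grad(meta_loss, list(meta_params.values()))
    with torch.no_grad():
        for p, g in zip(meta_params.values(), grads):
            p -= outer_lr * g

policy = make_policy()
meta_train_step(policy)
```
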

| Inner loop parameters | Value |
|---|---|
| `inner_lr` | 0.01 |
| `max_path_length` | 150 |
| `adapt_steps` | 3 |
| `adapt_batch_size` | 10 |
| `ppo_epochs` | 3 |
| `ppo_clip_ratio` | 0.2 |

| Outer loop parameters | Value |
|---|---|
| `meta_batch_size` | 20 |
| `outer_lr` | 0.1 |
| `backtrack_factor` | 0.5 |
| `ls_max_steps` | 15 |
| `max_kl` | 0.01 |

| Common parameters | Value |
|---|---|
| `activation` | 'relu' |
| `tau` | 0.95 |
| `gamma` | 0.99 |
| `fc_neurons` | 100 |

| Evo parameters | Value |
|---|---|
| `sigma` | 0.001 |
| `temp` | 0.05 |
| `n_model` | 2 |
| `evo_lr` | 0.01 |

| Grad norm parameters | Value |
|---|---|
| `norm_a` | 0.1 |
| `grad_rate` | 0.001 |

| Other parameters | Value | Description |
|---|---|---|
| `algo_name` | 'ER-MAML' | |
| `adapt_steps` | 3 | Number of steps to adapt to a new task |
| `adapt_batch_size` | 20 | Number of shots per task |
| `n_tasks` | 10 | Number of different tasks to evaluate on |
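
One possible way to collect these settings in code is a nested dictionary mirroring the tables above. The name `DEFAULT_CONFIG` and the grouping keys are illustrative, not taken from this repository, but the values are exactly those listed in the tables.

```python
# Illustrative grouping of the hyperparameters above; dictionary name and
# nesting are hypothetical, the values mirror the tables.
DEFAULT_CONFIG = {
    "inner_loop": {
        "inner_lr": 0.01,
        "max_path_length": 150,
        "adapt_steps": 3,
        "adapt_batch_size": 10,
        "ppo_epochs": 3,
        "ppo_clip_ratio": 0.2,
    },
    "outer_loop": {
        "meta_batch_size": 20,
        "outer_lr": 0.1,
        "backtrack_factor": 0.5,
        "ls_max_steps": 15,
        "max_kl": 0.01,
    },
    "common": {
        "activation": "relu",
        "tau": 0.95,
        "gamma": 0.99,
        "fc_neurons": 100,
    },
    "evo": {
        "sigma": 0.001,
        "temp": 0.05,
        "n_model": 2,
        "evo_lr": 0.01,
    },
    "grad_norm": {
        "norm_a": 0.1,
        "grad_rate": 0.001,
    },
    "other": {
        "algo_name": "ER-MAML",
        "adapt_steps": 3,        # number of steps to adapt to a new task
        "adapt_batch_size": 20,  # number of shots per task
        "n_tasks": 10,           # number of different tasks to evaluate on
    },
}
```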