ER-MAML: Meta-Reinforcement Learning with Evolving Gradient Regularization

ER-MAML is an extension of the Model-Agnostic Meta-Learning (MAML) algorithm that adds evolving gradient regularization to improve generalization in meta-reinforcement-learning tasks. This repository provides the core implementation of ER-MAML, its hyperparameter configurations, and auxiliary code built on the original MAML-PyTorch framework (https://github.com/dragen1860/MAML-Pytorch).
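
To make the structure concrete, below is a minimal, self-contained sketch of the MAML-style inner/outer loop that ER-MAML builds on, written against the hyperparameter names used in the table further down (inner_lr, adapt_steps, meta_batch_size, outer_lr). The two-layer network and toy regression data are hypothetical stand-ins for the policy network and per-task rollouts, and the evolving-gradient-regularization step itself is only indicated by a comment; see the repository code for the actual ER-MAML update.

```python
import torch

def forward(params, x):
    # Hypothetical two-layer MLP standing in for the policy network.
    w1, b1, w2, b2 = params
    return torch.relu(x @ w1 + b1) @ w2 + b2

def inner_adapt(params, x, y, inner_lr=0.01, adapt_steps=3):
    # MAML inner loop: a few gradient steps on a task's support data,
    # keeping the graph (create_graph=True) so the outer loop can
    # differentiate through the adaptation.
    adapted = [p.clone() for p in params]
    for _ in range(adapt_steps):
        loss = ((forward(adapted, x) - y) ** 2).mean()
        grads = torch.autograd.grad(loss, adapted, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(adapted, grads)]
    return adapted

# Meta-parameters: the shared initialization that is meta-learned.
params = [torch.randn(4, 32, requires_grad=True),
          torch.zeros(32, requires_grad=True),
          torch.randn(32, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
outer_opt = torch.optim.SGD(params, lr=0.1)  # outer_lr

meta_batch_size = 20
for _ in range(10):  # meta-iterations
    outer_opt.zero_grad()
    for _ in range(meta_batch_size):
        # Toy regression data stands in for per-task policy rollouts.
        x_s, y_s = torch.randn(10, 4), torch.randn(10, 1)  # support set
        x_q, y_q = torch.randn(10, 4), torch.randn(10, 1)  # query set
        adapted = inner_adapt(params, x_s, y_s)
        meta_loss = ((forward(adapted, x_q) - y_q) ** 2).mean() / meta_batch_size
        meta_loss.backward()  # accumulates meta-gradients into `params`
    # ER-MAML additionally regularizes the meta-gradient before this step
    # (the 'Evo' and 'Grad norm' hyperparameters below); omitted here.
    outer_opt.step()
```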


📦 Key hyperparameters in `hyperparameters setting.py`

| Inner loop parameters | Value |
| --- | --- |
| 'inner_lr' | 0.01 |
| 'max_path_length' | 150 |
| 'adapt_steps' | 3 |
| 'adapt_batch_size' | 10 |
| 'ppo_epochs' | 3 |
| 'ppo_clip_ratio' | 0.2 |

| Outer loop parameters | Value |
| --- | --- |
| 'meta_batch_size' | 20 |
| 'outer_lr' | 0.1 |
| 'backtrack_factor' | 0.5 |
| 'ls_max_steps' | 15 |
| 'max_kl' | 0.01 |

| Common parameters | Value |
| --- | --- |
| 'activation' | 'relu' |
| 'tau' | 0.95 |
| 'gamma' | 0.99 |
| 'fc_neurons' | 100 |

| Evolving gradient regularization parameters | Value |
| --- | --- |
| 'sigma' | 0.001 |
| 'temp' | 0.05 |
| 'n_model' | 2 |
| 'evo_lr' | 0.01 |

| Gradient norm parameters | Value |
| --- | --- |
| 'norm_a' | 0.1 |
| 'grad_rate' | 0.001 |

| Other parameters | Value | Description |
| --- | --- | --- |
| 'algo_name' | 'ER-MAML' | |
| 'adapt_steps' | 3 | Number of steps to adapt to a new task |
| 'adapt_batch_size' | 20 | Number of shots per task |
| 'n_tasks' | 10 | Number of different tasks to evaluate on |
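
For quick reference, the same values can be collected into a single Python dictionary. The grouping into nested dicts below simply mirrors the tables above and is an assumed layout, not necessarily how the settings file organizes them.

```python
# Illustrative consolidation of the values listed above; the nesting by group
# is an assumption, not the actual layout of the settings file.
hyperparameters = {
    'inner_loop': {
        'inner_lr': 0.01,
        'max_path_length': 150,
        'adapt_steps': 3,
        'adapt_batch_size': 10,
        'ppo_epochs': 3,
        'ppo_clip_ratio': 0.2,
    },
    'outer_loop': {
        'meta_batch_size': 20,
        'outer_lr': 0.1,
        'backtrack_factor': 0.5,
        'ls_max_steps': 15,
        'max_kl': 0.01,
    },
    'common': {
        'activation': 'relu',
        'tau': 0.95,
        'gamma': 0.99,
        'fc_neurons': 100,
    },
    'evo': {
        'sigma': 0.001,
        'temp': 0.05,
        'n_model': 2,
        'evo_lr': 0.01,
    },
    'grad_norm': {
        'norm_a': 0.1,
        'grad_rate': 0.001,
    },
    'other': {
        'algo_name': 'ER-MAML',
        'adapt_steps': 3,        # number of steps to adapt to a new task
        'adapt_batch_size': 20,  # number of shots per task
        'n_tasks': 10,           # number of different tasks to evaluate on
    },
}
```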
