RonaldTao/AAMAS_code
Optimizing Crop Management with Reinforcement Learning and Imitation Learning

Accepted by AAMAS: the paper is available on arXiv.

1. Dependencies

Follow https://rgautron.gitlabpages.inria.fr/gym-dssat-docs/Installation/index.html to install the required GYM-DSSAT. We recommend following the 'With packages' instructions; note that GYM-DSSAT is currently available only for Linux machines.

2. Training

2.1 Baseline

Run

Baseline.ipynb

with custom weights to get the performance of the baseline method for policy comparison.
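The notebooks take "custom weights" that trade off competing objectives. As an illustration only (the weight names and reward shape below are assumptions, not taken from the repository), a weighted reward and a fixed-schedule baseline might look like:

```python
# Hypothetical sketch of a weighted reward for policy comparison.
# w_yield and w_fert are assumed names for the custom weights; the actual
# reward used in Baseline.ipynb may differ.

def weighted_reward(grain_yield, nitrogen_applied, w_yield=1.0, w_fert=0.2):
    """Combine crop yield and fertilizer cost into one scalar reward."""
    return w_yield * grain_yield - w_fert * nitrogen_applied

def baseline_policy(_observation, amount=40.0):
    """A fixed schedule: apply the same nitrogen amount at every step."""
    return amount
```

Changing the weights changes which policy looks best, which is why the baseline is re-run with the same weights used for the learned policies.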

2.2 RL-based Training

For RL-based policy training, the code can be found in the 'RL-based Training' folder.

Run

Full_observation.ipynb

to train policies under full observation with custom weights using RL.
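The core loop inside such a notebook is standard RL against the crop-simulation environment. A minimal sketch, using tabular Q-learning on a toy stand-in environment (the real notebooks train on GYM-DSSAT, so the states, actions, and rewards here are placeholders):

```python
import random

class ToyCropEnv:
    """Toy 3-state chain: action 1 ("fertilize") advances growth; reward at the end."""
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):
        if action == 1:
            self.state += 1
        done = self.state == 2
        reward = 10.0 if done else -1.0
        return self.state, reward, done

def q_learning(env, episodes=200, alpha=0.5, gamma=0.9, eps=0.1):
    """Train an epsilon-greedy tabular Q-learning agent."""
    random.seed(0)
    q = {(s, a): 0.0 for s in range(3) for a in (0, 1)}
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < eps:
                a = random.choice((0, 1))          # explore
            else:
                a = max((0, 1), key=lambda b: q[(s, b)])  # exploit
            s2, r, done = env.step(a)
            # Bellman update; future value is zero at terminal states.
            target = r + gamma * max(q[(s2, b)] for b in (0, 1)) * (not done)
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q
```

The notebooks replace the toy environment with GYM-DSSAT and the table with a function approximator, but the interaction pattern (reset, act, observe reward, update) is the same.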

Run

Partial_onserbation.ipynb

to train policies under partial observation with custom weights using RL.
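Partial observation means the agent sees only a subset of the simulator's state. One way to sketch this is a wrapper that masks hidden features (the feature names below are illustrative, not necessarily GYM-DSSAT's):

```python
# Assumed, illustrative feature names -- not taken from the notebooks.
OBSERVED = {"cumsumfert", "dap", "rain"}  # assumed observable subset

def partial_observation(full_obs):
    """Keep only the observable features of a full-state dict."""
    return {k: v for k, v in full_obs.items() if k in OBSERVED}
```

Training under partial observation then simply feeds the masked observation to the same RL pipeline.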

2.3 IL-based Training

For IL-based policy training, the code can be found in the 'IL-based Training' folder.

Given a saved RL-trained policy (expert) under full observation, run

Generate_data_set.ipynb

to obtain observation-action pairs for IL training.

Once we obtain the dataset, run

IL_based_training.ipynb

to train a policy under partial observation using IL.
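IL here is behavior cloning: fit a policy to the expert's observation-action pairs by supervised learning. As a minimal stand-in for the network trained in the notebook, a 1-nearest-neighbour "policy" over scalar observations illustrates the idea (this is a sketch, not the repository's implementation):

```python
def bc_train(dataset):
    """Behavior cloning via 1-nearest-neighbour lookup.

    dataset: list of (observation, action) pairs with numeric observations.
    Returns a policy that copies the action of the closest seen observation.
    """
    def policy(obs):
        _, nearest_action = min(dataset, key=lambda pair: abs(pair[0] - obs))
        return nearest_action
    return policy
```

A real implementation would minimize a classification or regression loss over the dataset, but the interface is the same: dataset in, policy out.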

3. Evaluation

Given any trained policy, we can use the corresponding notebook from the 'Policy Evalution' folder to evaluate its performance.

Run

IL_trained_partial.ipynb

to evaluate IL-trained policies under partial observation.

Run

RL_trained_full.ipynb 

to evaluate RL-trained policies under full observation.

Run

RL_trained_partial.ipynb

to evaluate RL-trained policies under partial observation.
