Accepted by AAMAS: the paper is available on arXiv.
Follow https://rgautron.gitlabpages.inria.fr/gym-dssat-docs/Installation/index.html to install the required GYM-DSSAT. We recommend following the 'With packages' instructions; note that GYM-DSSAT is currently only available for Linux machines.
Run `Baseline.ipynb` with custom weights to get the performance of the baseline method for policy comparison.
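The baseline evaluation can be sketched as follows. This is illustrative only: the notebook runs against gym-dssat, whereas here a tiny stub environment, a hypothetical fixed fertilization schedule, and assumed reward weights (`w_yield`, `w_fert`) stand in so the loop is runnable end to end.

```python
# Illustrative sketch only: the real notebook evaluates on gym-dssat.
# The stub environment, schedule, and weight names are assumptions.

class StubFertilizationEnv:
    """Minimal gym-style stand-in: 10 steps, action = N amount (kg/ha)."""
    def reset(self):
        self.t = 0
        return {"day": self.t}

    def step(self, action):
        self.t += 1
        yield_gain = 0.1 * action          # crude crop response to N
        obs = {"day": self.t}
        done = self.t >= 10
        return obs, (yield_gain, action), done, {}

def baseline_policy(obs):
    """Fixed expert-like schedule: fertilize on a few preset days."""
    return 40.0 if obs["day"] in (2, 5, 8) else 0.0

def evaluate(env, policy, w_yield=1.0, w_fert=0.2, episodes=5):
    """Average weighted return: reward = w_yield*gain - w_fert*N applied."""
    returns = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            action = policy(obs)
            obs, (gain, n_used), done, _ = env.step(action)
            total += w_yield * gain - w_fert * n_used
        returns.append(total)
    return sum(returns) / len(returns)

print(evaluate(StubFertilizationEnv(), baseline_policy))
```

Changing the weights trades yield against fertilizer use, which is what "custom weights" controls when comparing policies.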
For RL-based policy training, the code can be found in the folder 'RL-based Training'.
Run `Full_observation.ipynb` to train policies under full observation with custom weights using RL.
Run `Partial_onserbation.ipynb` to train policies under partial observation with custom weights using RL.
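The RL training idea can be sketched with the simplest possible learner. This is not the notebooks' actual algorithm or environment (both are assumptions here): tabular Q-learning on a toy fertilization MDP where the state is the day index and the action is whether to apply nitrogen. A partial-observation variant would expose only part of this state to the agent.

```python
# Conceptual RL sketch (not the notebooks' algorithm/environment):
# tabular Q-learning on a tiny fertilization MDP.
import random

N_DAYS, ACTIONS = 6, (0, 1)             # actions: 0 = no N, 1 = apply N
GOOD_DAYS = {1, 4}                      # days where fertilizing pays off

def reward(day, action):
    if action == 0:
        return 0.0
    return 1.0 if day in GOOD_DAYS else -0.5   # cost outweighs gain

def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(d, a): 0.0 for d in range(N_DAYS) for a in ACTIONS}
    for _ in range(episodes):
        for day in range(N_DAYS):
            # epsilon-greedy action selection
            a = rng.choice(ACTIONS) if rng.random() < eps else \
                max(ACTIONS, key=lambda x: q[(day, x)])
            r = reward(day, a)
            nxt = 0.0 if day + 1 == N_DAYS else \
                max(q[(day + 1, b)] for b in ACTIONS)
            # standard Q-learning temporal-difference update
            q[(day, a)] += alpha * (r + gamma * nxt - q[(day, a)])
    return q

q = train()
policy = [max(ACTIONS, key=lambda a: q[(d, a)]) for d in range(N_DAYS)]
print(policy)   # learned schedule: fertilize only on the profitable days
```

The trained greedy policy recovers the profitable fertilization days; the notebooks do the analogous thing at scale on the gym-dssat crop model.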
For IL-based policy training, the code can be found in the folder 'IL-based Training'.
Given a saved RL-trained policy (the expert) under full observation, run `Generate_data_set.ipynb` to obtain observation-action pairs for IL training.
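The dataset-generation step can be sketched as follows. All names and dynamics here are illustrative stand-ins (not the notebook's API): a fully observing expert is rolled out, and each stored pair keeps only the partial observation alongside the expert's action, so the IL learner never sees the expert's privileged inputs.

```python
# Sketch of dataset generation for IL (names/dynamics are assumptions):
# roll out a full-observation expert and record (partial_obs, action).

def expert_action(full_obs):
    """Stand-in for the RL-trained expert: acts on the full state."""
    day, soil_n = full_obs
    return 1 if soil_n < 20 else 0      # fertilize when soil N is low

def rollout(n_days=8):
    """One episode; stored observations drop soil N (the partial view)."""
    pairs, soil_n = [], 30.0
    for day in range(n_days):
        full_obs = (day, soil_n)
        a = expert_action(full_obs)
        partial_obs = (day,)            # expert's privileged input hidden
        pairs.append((partial_obs, a))
        soil_n = soil_n - 5.0 + (25.0 if a else 0.0)  # crude N dynamics
    return pairs

dataset = [p for _ in range(3) for p in rollout()]
print(len(dataset), dataset[0])
```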
Once the dataset is obtained, run `IL_based_training.ipynb` to train a policy under partial observation using IL.
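The core of IL training here is behavioral cloning: fit a model that maps partial observations to expert actions. The notebook's actual model and optimizer are not shown in this README, so the sketch below substitutes the simplest possible learner (a per-observation majority vote) to make the idea concrete.

```python
# Behavioral-cloning sketch (the notebook's model/optimizer are
# assumptions not shown here; a majority-vote table stands in):
# imitate the expert's most frequent action for each observation.
from collections import Counter, defaultdict

def fit_bc(pairs):
    """pairs: iterable of (observation, expert_action)."""
    votes = defaultdict(Counter)
    for obs, act in pairs:
        votes[obs][act] += 1
    table = {obs: c.most_common(1)[0][0] for obs, c in votes.items()}
    def policy(obs, default=0):
        return table.get(obs, default)  # unseen obs fall back to no-op
    return policy

# toy dataset: observation = (day,), action copied from an expert
data = [((0,), 0), ((0,), 0), ((1,), 1), ((1,), 1), ((1,), 0)]
policy = fit_bc(data)
print(policy((1,)))   # majority expert action for day 1
```

In the notebook the same supervised idea is realized with a function approximator trained on the generated observation-action pairs.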
Given any trained policy, we can use the corresponding file from the folder 'Policy Evalution' to evaluate its performance.
Run `IL_trained_partial.ipynb` to evaluate IL-trained policies under partial observation.
Run `RL_trained_full.ipynb` to evaluate RL-trained policies under full observation.
Run `RL_trained_partial.ipynb` to evaluate RL-trained policies under partial observation.
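All three evaluation notebooks share the same shape: roll out the trained policy for several episodes and report the mean episodic return. A minimal sketch, again against a stub environment rather than gym-dssat (the stub and its reward are assumptions):

```python
# Generic policy-evaluation sketch (stub env stands in for gym-dssat).
import statistics

class StubEnv:
    """Gym-style stand-in: 5 steps, reward 1 for the 'right' action."""
    def reset(self):
        self.t = 0
        return self.t
    def step(self, action):
        r = 1.0 if action == self.t % 2 else 0.0
        self.t += 1
        return self.t, r, self.t >= 5, {}

def mean_return(env, policy, episodes=10):
    """Average episodic return of `policy` over `episodes` rollouts."""
    totals = []
    for _ in range(episodes):
        obs, done, total = env.reset(), False, 0.0
        while not done:
            obs, r, done, _ = env.step(policy(obs))
            total += r
        totals.append(total)
    return statistics.mean(totals)

print(mean_return(StubEnv(), lambda obs: obs % 2))
```

Comparing this number across the baseline, RL-trained, and IL-trained policies (under matching observation settings and reward weights) is the intended comparison.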