Instructions for running on New York University's Prince computer cluster.

1. Clone the repository:

   `git clone https://github.com/wh629/c-bert.git`

2. Perform the following commands:

   ```
   module purge
   module load anaconda3/5.3.1
   module load cuda/10.0.130
   module load gcc/6.3.0
   ```

3. In the cloned repository, create the anaconda environment `cbert` from `environment.yml`:

   `conda env create -f environment.yml`
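If the runs are launched inside the environment, it needs to be activated first. A minimal sketch, assuming the `cbert` name defined in `environment.yml`; the exact activation command depends on how the `anaconda3/5.3.1` module is configured:

```bash
# Activate the cbert environment created above.
# Older module-provided Anaconda installs often use `source activate`;
# newer ones support `conda activate` after `conda init`.
source activate cbert   # or: conda activate cbert
```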
4. In the repository, set up the following directories (see the sketch after this list):
   a. `data`
   b. `log`
   c. `results` (for cached data, place files in `results/cached_data/<model-name>/`)
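A minimal sketch of the directory setup, assuming the `bert-base-uncased` cache folder from step 6 and the `results/meta_weights/` folder from step 7:

```bash
# Create the directories the scripts expect, from the repository root.
mkdir -p data log
mkdir -p results/cached_data/bert-base-uncased
mkdir -p results/meta_weights
```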
5. Load data into `data`.
6. For faster runs, load cached data into the `results/cached_data/bert-base-uncased/` folder.
7. Load meta weights into `results/meta_weights/`.
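One way to copy data and weights up from a local machine; the login host `prince.hpc.nyu.edu`, the `<netid>` placeholder, and the local file names are all assumptions to adapt to your setup:

```bash
# Copy a local dataset into the cluster's data directory (paths are hypothetical).
scp -r ./squad_data/ <netid>@prince.hpc.nyu.edu:~/c-bert/data/
# Copy pre-trained meta weights into place.
scp meta_meta_weights.pt <netid>@prince.hpc.nyu.edu:~/c-bert/results/meta_weights/
```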
8. Train on SQuAD, either with frozen embeddings or with fine-tuning:
   a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file (see the sketch after this list).
   b. For frozen embeddings, use `sbatch baseline_SQuAD_frozen.sbatch`.
   c. For fine-tuning, use `sbatch baseline_SQuAD_finetune.sbatch`.
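A hedged sketch of step 8a without opening an editor; the checkout path `/scratch/$USER/c-bert` is a hypothetical example:

```bash
# Point PROJECT at your checkout in the chosen script,
# then submit the frozen-embeddings variant.
sed -i 's|^PROJECT=.*|PROJECT=/scratch/'"$USER"'/c-bert|' baseline_SQuAD_frozen.sbatch
sbatch baseline_SQuAD_frozen.sbatch
```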
9. Outputs will be found in `results` in the following sub-directories:
   a. `cached_data` - cached data as `.pt` files
   b. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
10. Monitor the run using `log/baseline_SQuAD_<frozen/finetune>_run_log_<date>_<time>.log`.
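To watch a submitted run live, standard `tail -f` on the log file works; the glob below assumes a frozen-embeddings run and picks up whatever date/time suffix the job wrote:

```bash
# Follow the baseline SQuAD log as it is written.
tail -f log/baseline_SQuAD_frozen_run_log_*.log
```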
11. Train on TriviaQA, either with frozen embeddings or with fine-tuning, and evaluate continual learning:
    a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file.
    b. For frozen embeddings, use `sbatch baseline_TriviaQA_ContinualLearning_frozen.sbatch`.
    c. For fine-tuning, use `sbatch baseline_TriviaQA_ContinualLearning_finetune.sbatch`.
12. Outputs will be found in `results` in the following sub-directories:
    a. `cached_data` - cached data as `.pt` files
    b. `json_results` - F1 scores for plotting in `.json` files
    c. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
    d. `plots` - plots of results in `.png` files
13. Monitor the run using `log/baseline_TriviaQA_ContinualLearning_<frozen/finetune>_run_log_<date>_<time>.log`.
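Since the `.sbatch` scripts go through Slurm, job state can also be checked with the standard Slurm client commands:

```bash
# List your pending and running jobs on the cluster.
squeue -u "$USER"
# Cancel a job by ID if a run needs to be restarted (the ID is an example).
# scancel 1234567
```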
14. Perform meta-learning with `sbatch Meta.sbatch`.
15. Meta-learned weights can be found in `results/meta_weights/meta_meta_weights.pt`.
16. Monitor the run using `log/meta_meta_run_log_<date>_<time>.log`.
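A quick sanity check that meta-learning produced a loadable weights file; this assumes PyTorch is available in the active `cbert` environment:

```bash
# Confirm the weights file exists and loads as a PyTorch object.
ls -lh results/meta_weights/meta_meta_weights.pt
python -c "import torch; w = torch.load('results/meta_weights/meta_meta_weights.pt', map_location='cpu'); print(type(w))"
```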
17. Train cBERT (with the meta-learned weights) on SQuAD, either with frozen embeddings or with fine-tuning:
    a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file.
    b. For frozen embeddings, use `sbatch cBERT_SQuAD_frozen.sbatch`.
    c. For fine-tuning, use `sbatch cBERT_SQuAD_finetune.sbatch`.
18. Outputs will be found in `results` in the following sub-directories:
    a. `cached_data` - cached data as `.pt` files
    b. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
19. Monitor the run using `log/cbert_SQuAD_<frozen/finetune>_run_log_<date>_<time>.log`.
20. Train cBERT on TriviaQA, either with frozen embeddings or with fine-tuning, and evaluate continual learning:
    a. Fill out `PROJECT=<Repository Directory>` in the desired `.sbatch` file.
    b. For frozen embeddings, use `sbatch cBERT_TriviaQA_ContinualLearning_frozen.sbatch`.
    c. For fine-tuning, use `sbatch cBERT_TriviaQA_ContinualLearning_finetune.sbatch`.
21. Outputs will be found in `results` in the following sub-directories:
    a. `cached_data` - cached data as `.pt` files
    b. `json_results` - F1 scores for plotting in `.json` files
    c. `logged/<model-name>/<task-name>` - model state dictionaries as `.pt` files
    d. `plots` - plots of results in `.png` files
22. Monitor the run using `log/cbert_TriviaQA_ContinualLearning_<frozen/finetune>_run_log_<date>_<time>.log`.
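To inspect the F1 scores in `json_results` without extra tooling, Python's built-in JSON pretty-printer is enough; the glob assumes at least one completed continual-learning run:

```bash
# Pretty-print each F1-score JSON produced by the continual-learning runs.
for f in results/json_results/*.json; do
    echo "== $f =="
    python -m json.tool "$f"
done
```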