
Conversation

@xt0r3-Cambridge

Implemented support for a separate upstream learning rate, as raised in a feature request.

Usage instructions:
Simply add an additional upstream_lr parameter to the optimizer settings, like this:

optimizer:
  lr: 3.e-4
  upstream_lr: 1.e-5
  [other settings]

The upstream model is only trained if the code is run with the --upstream-trainable (or -f) flag.
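
For reference, here is a minimal sketch of how two learning rates can be wired up with PyTorch parameter groups; the module names (upstream, downstream) are illustrative placeholders, not the actual s3prl internals:

import torch

# Stand-ins for the pretrained upstream model and the task-specific head.
upstream = torch.nn.Linear(16, 16)
downstream = torch.nn.Linear(16, 4)

config = {"lr": 3e-4, "upstream_lr": 1e-5}

# Fall back to the main lr when upstream_lr is absent, preserving the
# old single-learning-rate behaviour.
upstream_lr = config.get("upstream_lr", config["lr"])

# One parameter group per learning rate: the downstream head trains at
# the full lr, while the upstream is updated with the smaller upstream_lr.
optimizer = torch.optim.Adam([
    {"params": downstream.parameters(), "lr": config["lr"]},
    {"params": upstream.parameters(), "lr": upstream_lr},
])

PyTorch applies each group's lr independently, so the pretrained upstream can be nudged gently while the randomly initialised head trains at full speed.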

@ccaven (Contributor) commented Jun 27, 2024

Hi @xt0r3-Cambridge - just wanted to say thanks for implementing this. I recently used it when fine-tuning, since training the upstream at such a high LR generally wipes out all the pretraining knowledge. Hopefully the s3prl maintainers will take a look at this eventually.
