
Conversation

@AnirudhDagar
Member

Description of changes:

By submitting this pull request, I confirm that you can use, modify,
copy, and redistribute this contribution, under the terms of your
choice.

@mli
Member

mli commented Aug 7, 2020

Job d2l-en/PR-1305/2 is complete.
Check the results at http://preview.d2l.ai/d2l-en/PR-1305/


@AnirudhDagar AnirudhDagar changed the title [WIP][PyTorch] Add lr scheduler [PyTorch] Add lr scheduler Aug 9, 2020


```diff
-scheduler = lr_scheduler.CosineScheduler(20, warmup_steps=5, base_lr=0.5,
+scheduler = lr_scheduler.CosineScheduler(20, warmup_steps=5, base_lr=0.3,
```
Member


Why do we change base_lr?

Member Author


The PyTorch implementation was somehow not converging reliably when base_lr was 0.5, so I changed it across all frameworks for consistency.
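For context, the scheduler under discussion combines linear warmup with cosine decay. Below is a minimal standalone sketch consistent with the constructor call in the diff above (`CosineScheduler(20, warmup_steps=5, base_lr=0.3)`); it is an illustrative reimplementation under those assumptions, not the repository's exact code.

```python
import math

class CosineScheduler:
    """Sketch: linear warmup to base_lr, then cosine decay to final_lr."""
    def __init__(self, max_update, base_lr=0.01, final_lr=0,
                 warmup_steps=0, warmup_begin_lr=0):
        self.base_lr_orig = base_lr
        self.max_update = max_update
        self.final_lr = final_lr
        self.warmup_steps = warmup_steps
        self.warmup_begin_lr = warmup_begin_lr
        # Number of steps over which the cosine decay runs
        self.max_steps = self.max_update - self.warmup_steps

    def get_warmup_lr(self, epoch):
        # Linearly increase from warmup_begin_lr to base_lr over warmup_steps
        increase = (self.base_lr_orig - self.warmup_begin_lr) \
            * float(epoch) / float(self.warmup_steps)
        return self.warmup_begin_lr + increase

    def __call__(self, epoch):
        if epoch < self.warmup_steps:
            return self.get_warmup_lr(epoch)
        if epoch <= self.max_update:
            # Cosine-anneal from base_lr down to final_lr
            self.base_lr = self.final_lr + (
                self.base_lr_orig - self.final_lr) * (1 + math.cos(
                    math.pi * (epoch - self.warmup_steps)
                    / self.max_steps)) / 2
        return self.base_lr

scheduler = CosineScheduler(20, warmup_steps=5, base_lr=0.3)
# scheduler(5) -> 0.3 (warmup finished), scheduler(20) -> 0.0 (fully decayed)
```

With this shape, a smaller `base_lr` only rescales the warmup ramp and the cosine peak; the schedule's geometry is unchanged, which is why switching 0.5 to 0.3 across frameworks keeps the curves comparable.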

Member


That's OK. Just give me a heads-up if you change the original MXNet code :)

Member Author


Sure! I mentioned it here, and I will flag it more explicitly in the future whenever the MXNet code is changed.

```diff
-#@tab tensorflow
-scheduler = SquareRootScheduler(1.0)
+#@tab all
+scheduler = SquareRootScheduler(lr=0.1)
```
Member

@astonzhang astonzhang Aug 11, 2020


Here's another example of a modification to the original code. Please make sure that similar results are obtained.
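For reference, a square-root scheduler of this kind decays the learning rate as lr·(t+1)^(-1/2). The class body below is a sketch modeled on the callable-scheduler convention implied by the diff (a constructor taking `lr` and a `__call__` taking the update count); it is an assumption, not the repository's exact implementation.

```python
class SquareRootScheduler:
    """Sketch: decay the learning rate as lr * (num_update + 1) ** -0.5."""
    def __init__(self, lr=0.1):
        self.lr = lr

    def __call__(self, num_update):
        # O(1/sqrt(t)) decay, a classic rate for convex SGD analyses
        return self.lr * pow(num_update + 1.0, -0.5)

scheduler = SquareRootScheduler(lr=0.1)
# scheduler(0) -> 0.1, scheduler(3) -> 0.05
```

Under this form, changing the constructor argument from 1.0 to 0.1 scales every step's learning rate by the same factor of 10, so the reviewer's request to verify that similar training results are obtained is the key check.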

@AnirudhDagar
Member Author

Should we merge this now?

@astonzhang astonzhang merged commit faf0bb3 into d2l-ai:master Aug 11, 2020
3 participants