Conversation

@ChrisZhangcx

Make model performance reproducible across multiple training runs.
The simplest way is to fix the one-hot encoding assigned to each class every time we re-run train.py, so that a class always maps to the same index. With this change, the model performance is reproducible at accuracy 0.8250 with loss 0.7380.
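A minimal sketch of what "fixing the one-hot encoding" could look like, assuming the labels arrive as an unordered collection (iterating over a plain `set()` does not guarantee the same class order across runs); the function name `encode_onehot` and its signature are illustrative, not taken from this PR:

```python
import numpy as np

def encode_onehot(labels):
    # Sorting the class set pins down the column order of the one-hot
    # matrix, so repeated runs of train.py assign each class the same
    # index. Without sorting, set iteration order can vary between runs,
    # shuffling the label columns and changing the reported metrics.
    classes = sorted(set(labels))
    class_to_index = {c: i for i, c in enumerate(classes)}
    onehot = np.zeros((len(labels), len(classes)), dtype=np.float32)
    for row, label in enumerate(labels):
        onehot[row, class_to_index[label]] = 1.0
    return onehot
```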
