NaN during training when using own dataset #4

@cjay42

Description

While fine-tuning works as expected, regular training on a dataset other than LJSpeech eventually produces a NaN loss.
The culprit appears to be the following line, which causes a division by zero if wav happens to contain perfect silence:

wav = flip * gain * wav / wav.abs().max()

I'm not sure what the best solution for this would be; as a quick fix, I simply clamped the divisor so it can't reach zero:

wav = flip * gain * wav / max([wav.abs().max(), 0.001])
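
For comparison, a clamp-based variant of the same idea, as a minimal sketch; the function name, the 1e-3 epsilon, and treating flip/gain as plain scalars are my assumptions, not code from this repository:

import torch

def normalize_wav(wav: torch.Tensor, gain: float = 1.0, flip: float = 1.0, eps: float = 1e-3) -> torch.Tensor:
    # Clamp the peak so an all-silence clip cannot produce a division by zero.
    peak = wav.abs().max().clamp(min=eps)
    return flip * gain * wav / peak

Silent clips then come out as (near-)zero waveforms instead of NaNs; alternatively, such clips could be filtered out of the dataset up front.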
