Greedy speedup #7

@ghannum


Soft-dtw looks like the perfect solution for my deep-learning model. However, the speed is a major bottleneck in training (minibatches of 64 samples, each with 2000 positions × 25 classes).

Would it be possible to add a parameter for greedy scoring that scales better in time?

For example, I never need alignments with more than a few insertions/deletions. Perhaps this could be achieved by controlling the maximum recursion depth?
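To sketch what I have in mind: restricting the recursion to a diagonal band (a Sakoe–Chiba-style constraint) would cap the number of insertions/deletions and drop the cost from O(n·m) to O(n·band). This isn't part of the soft-dtw API — `banded_soft_dtw` and `soft_min` below are hypothetical names, just a rough NumPy sketch of the idea:

```python
import numpy as np

def soft_min(values, gamma):
    """Smoothed minimum: -gamma * log(sum(exp(-v / gamma))), computed stably."""
    v = np.asarray(values, dtype=float)
    m = v.min()
    return m - gamma * np.log(np.exp(-(v - m) / gamma).sum())

def banded_soft_dtw(D, band, gamma=1.0):
    """Soft-DTW over a pairwise cost matrix D, restricted to cells |i - j| <= band.

    Cells outside the band are left at +inf, so alignments with more than
    `band` insertions/deletions are effectively forbidden.
    """
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        lo = max(1, i - band)          # only visit the diagonal band
        hi = min(m, i + band)
        for j in range(lo, hi + 1):
            R[i, j] = D[i - 1, j - 1] + soft_min(
                [R[i - 1, j], R[i, j - 1], R[i - 1, j - 1]], gamma)
    return R[n, m]
```

Since the banded soft-min runs over a subset of the paths, its value upper-bounds the unconstrained soft-DTW and converges to it as the band widens; a small band (a few insertions/deletions) would cover my use case while being near-linear in sequence length.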
