Soft-DTW looks like the perfect solution for my deep-learning model. However, speed is a major bottleneck in training (minibatches of 64 samples, each with 2000 positions × 25 classes).
Would it be possible to add a parameter for greedy scoring that would scale better in time?
For example, I never need alignments with more than a few insertions/deletions. Perhaps this could be achieved by controlling the maximum recursion depth?
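For what it's worth, here is a minimal NumPy sketch of the kind of constraint I mean: a Sakoe-Chiba band on the soft-DTW recursion that skips all cells more than `band` steps off the diagonal, so the cost drops from O(n·m) to roughly O(n·band). The function name and the `band` parameter are just illustrative, not part of this library's API, and it assumes the two sequences have similar lengths.

```python
import numpy as np

def softmin(a, b, c, gamma):
    # Smooth minimum: -gamma * log(sum(exp(-x / gamma))), computed stably.
    vals = np.array([a, b, c]) / -gamma
    m = vals.max()
    return -gamma * (m + np.log(np.exp(vals - m).sum()))

def soft_dtw_banded(D, gamma=1.0, band=10):
    """Soft-DTW over a precomputed cost matrix D (n x m), restricted to a
    Sakoe-Chiba band of half-width `band` around the diagonal.

    Cells with |i - j| > band are treated as unreachable (+inf), which caps
    the number of insertions/deletions per alignment and makes the runtime
    O(n * band) instead of O(n * m). Assumes n ~= m; for very different
    lengths the band would need to follow the scaled diagonal instead.
    """
    n, m = D.shape
    R = np.full((n + 1, m + 1), np.inf)
    R[0, 0] = 0.0
    for i in range(1, n + 1):
        # Only visit columns inside the band around the diagonal.
        lo = max(1, i - band)
        hi = min(m, i + band)
        for j in range(lo, hi + 1):
            R[i, j] = D[i - 1, j - 1] + softmin(
                R[i - 1, j], R[i, j - 1], R[i - 1, j - 1], gamma
            )
    return R[n, m]

if __name__ == "__main__":
    # Toy example with the same feature width as my data (25 classes).
    rng = np.random.default_rng(0)
    x = rng.normal(size=(200, 25))
    y = rng.normal(size=(200, 25))
    D = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared Euclidean costs
    print(soft_dtw_banded(D, gamma=1.0, band=8))
```

Since the band only masks cells of the recursion (the soft-min itself is untouched), I would expect the result to stay differentiable, which is what I need for training; a small band like 8–16 would be plenty for my use case.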