Feature request
Is your feature request related to a problem? Please describe.
I am using this toolkit to run several experiments in which the upstream is frozen and only the downstream is modified each time. I find that training is quite slow for some upstream models, even though the downstream models are significantly lighter than the upstream in computational complexity. If there were a way to save and load the upstream outputs for each file, training would be significantly faster.
Describe the solution you'd like
Adding this feature is fairly straightforward when only one kind of feature needs to be stored (for example, the last hidden layer): store the featurizer's output for each audio file in the dataset to a cache path during the first epoch, then load it in subsequent epochs instead of computing it on the fly. But when the featurizer is trainable (as in the weighted-sum approach), all of the upstream's hidden layers must be stored, which is more complicated: the upstream output is a dictionary whose tensor sizes vary with batch size and whose entries vary with the upstream model. This could be handled either in the data loader itself or separately inside the training loop.
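To make the batch-size concern concrete, here is a minimal sketch of per-utterance caching of all hidden layers. The names (`CACHE_DIR`, `cache_key`, `save_hidden_states`, `load_hidden_states`) are hypothetical, not part of the toolkit; the key idea is to trim padding before saving, so each cached file depends only on the utterance, not on how it was batched:

```python
import os
import tempfile
import torch

CACHE_DIR = tempfile.mkdtemp()  # hypothetical cache location

def cache_key(audio_path: str) -> str:
    # One cache file per utterance, named after the audio file.
    return os.path.join(CACHE_DIR, os.path.basename(audio_path) + ".pt")

def save_hidden_states(audio_path, hidden_states, length):
    # hidden_states: list of per-utterance (padded_len, D) tensors,
    # one per upstream layer. Trim padding so the cached tensors are
    # independent of batch size, then stack to (num_layers, length, D).
    trimmed = [h[:length].cpu() for h in hidden_states]
    torch.save(torch.stack(trimmed), cache_key(audio_path))

def load_hidden_states(audio_path):
    # Returns a (num_layers, length, D) tensor, or None on cache miss
    # (e.g. during the first epoch, before the file has been written).
    path = cache_key(audio_path)
    return torch.load(path) if os.path.exists(path) else None

# With all layers cached, a trainable weighted sum still works, e.g.:
#   w = torch.softmax(layer_logits, dim=0)          # (num_layers,)
#   feat = (w[:, None, None] * cached).sum(dim=0)   # (length, D)
```

In the first epoch the cache misses and the upstream runs normally (slicing each batch item out of the padded output before saving); afterwards the data loader can return the cached tensors directly and the upstream forward pass is skipped entirely.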
I honestly feel it is worth spending some time adding this feature, as it would save a lot of compute resources and time.