fix: Removing torchscript from autoconfig (backport #3307) #3332
The simple pipeline hasn't been working for some time now, due to #3289. I've tracked the issue down to transformers' `utils.py`: enabling `torchscript` changes the type of the output returned by invocations of `GenerationMixin` objects. I'm not sure why exactly this happens, but the behavior is normalized by simply removing the `torchscript` parameter from the config. Since the purpose of TorchScript is to improve performance, I can guess that merging this fix may make the pipeline run a bit slower. But at least it will run.
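A minimal sketch of the workaround, assuming the model is loaded via `AutoConfig` (the `gpt2` model name here is a stand-in for illustration, not necessarily the model used by the pipeline):

```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in model name for illustration
config = AutoConfig.from_pretrained(model_name)

# Clear the torchscript flag so generation takes the normal code path;
# with torchscript enabled, GenerationMixin invocations return a
# different output type, which broke the pipeline.
config.torchscript = False

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, config=config)

inputs = tokenizer("Hello", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output_ids[0]))
```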
A proper fix should be implemented in transformers, but I don't have enough experience with that code base to know what to look for.
I'm also reverting #3290 since things should be working now.
Checklist:
- conventional commits
This is an automatic backport of pull request #3307 done by [Mergify](https://mergify.com).