TransformerDecoder forward_one_step with memory_mask #5679
Merged
mergify[bot] merged 7 commits into espnet:master on Feb 26, 2024
Conversation
Codecov Report

Attention: Patch coverage is

@@            Coverage Diff             @@
##           master    #5679      +/-   ##
==========================================
+ Coverage   70.27%   76.12%   +5.84%
==========================================
  Files         746      747       +1
  Lines       69369    69409      +40
==========================================
+ Hits        48752    52836    +4084
+ Misses      20617    16573    -4044

Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Collaborator:

Looks good to me! Since we do not implement batched inference right now, it should not affect the current behavior. I also found this issue when preparing the assignment for our speech course. In that assignment, I did a similar thing to add the mask so that we can use batched inference.

Contributor:

OK, thanks!
What?

I added the `memory_mask` arg to `BaseTransformerDecoder.forward_one_step`. Alongside, I made the `cache` arg keyword-only, so that the user cannot rely on the specific order of positional args. I think that's cleaner.

Why?

I needed the `memory_mask` arg for `TransformerDecoder.forward_one_step` because I wanted to perform batched beam search (i.e. decoding multiple sequences at once, so the batch dim in `memory` is > 1).
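To illustrate why a memory mask matters for batched decoding, here is a minimal sketch of building such a mask from per-utterance encoder lengths. The helper name `make_memory_mask` and the commented call at the end are hypothetical, not ESPnet code; they only mirror the idea described in this PR (a batch of encoder outputs padded to a common length, with cross-attention masked at the padded frames).

```python
import torch

def make_memory_mask(memory_lengths, max_len):
    """Hypothetical helper: build a (batch, 1, max_len) boolean mask that
    is True at valid encoder positions, so cross-attention can ignore the
    padded frames of shorter utterances in the batch."""
    positions = torch.arange(max_len).unsqueeze(0)              # (1, max_len)
    lengths = torch.as_tensor(memory_lengths).unsqueeze(1)      # (batch, 1)
    mask = positions < lengths                                  # (batch, max_len)
    return mask.unsqueeze(1)                                    # (batch, 1, max_len)

# Example: a batch of 3 encoder outputs with true lengths 5, 3, and 4,
# all padded to max_len=5.
memory_mask = make_memory_mask([5, 3, 4], max_len=5)

# Hypothetical use, mirroring the signature change described above:
# decoder.forward_one_step(tgt, tgt_mask, memory, memory_mask, cache=cache)
```

With a batch size of 1 (the current non-batched inference path), every memory position is valid, which is why omitting the mask did not affect existing behavior.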