[dynamo][super variable] Fix bug to use correct source #151154
Conversation
Will add a test in a follow-up PR, because this PR might need to be cherry-picked to 2.7, so I'm keeping it very simple to cherry-pick. [ghstack-poisoned]
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/151154
Note: links to docs will display an error until the docs builds have completed.
✅ No failures as of commit bf0eb70 with merge base 2653498. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Asked for review too early. CI failures say that something is really wrong. Converting to draft.
Will add a test in a follow-up PR, because this PR might need to be cherry-picked to 2.7, so I'm keeping it very simple to cherry-pick. Fixes #150994 cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
Fixes #150994. We should cherry-pick this to the 2.7 branch if possible, because the bug breaks torch.compile on some HF models; see the referenced issue for details. cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames [ghstack-poisoned]
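The failing pattern in the linked issue can be reduced to a class that routes every attribute read through `super().__getattribute__`, which is the call dynamo needs the correct source for. A minimal sketch with torch stripped out (`Config` and `hidden_size` are illustrative stand-ins, not the actual HF class):

```python
class Config:
    """Illustrative stand-in for an HF config class (not the real code)."""

    def __init__(self):
        self.hidden_size = 8

    def __getattribute__(self, key):
        # Every attribute read goes through super().__getattribute__;
        # when tracing a compiled function that touches an object like
        # this, dynamo must attach the correct source to the super() call.
        return super().__getattribute__(key)


cfg = Config()
print(cfg.hidden_size)  # -> 8
```

Under 2.7.0, tracing an attribute access like this inside a compiled region raised `Unsupported: Unexpected type in sourceless builder`; in eager Python the pattern is perfectly legal, as the example shows.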
Expected-results CSV change (third column updated from 5 to 6):

LayoutLMForSequenceClassification,pass,5 → LayoutLMForSequenceClassification,pass,6
@pytorchbot merge

Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.

@pytorchbot cherry-pick --onto release/2.7 -c critical

Cherry picking #151154: the cherry-pick command was run by the bot.
Details for Dev Infra team: raised by workflow job.
Hi @anijain2305, thanks a lot for the fix. If I want to try it, what would be the best approach? A wheel for the torch 2.7 RC with this fix included, or checking out a branch/tag?
Fixes pytorch#150994. We should cherry-pick to the 2.7 branch if possible, because this breaks torch.compile on some HF models; see the referenced issue. Pull Request resolved: pytorch#151154. Approved by: https://github.com/jansel
👋 I have the same question as @ydshieh -- what's the best way to try the fix? We noticed that
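For readers with the same question, here is a generic sketch of two ways to pick up a fix like this before a point release ships. These are standard pip/git commands, not recommendations from this thread; the nightly index URL is PyTorch's public one, and option 2 assumes a working from-source build environment:

```shell
# Option 1: install a nightly wheel, assuming the fix has landed on main
pip install --pre torch --index-url https://download.pytorch.org/whl/nightly/cpu

# Option 2: build from source on the release branch once the
# cherry-pick (#152774) has been merged into it
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
git checkout release/2.7
python -m pip install -e . --no-build-isolation
```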
Adding to 2.7.1.
Hi, sharing an observation. Today I tried to update our Docker images to use torch 2.7 (+cpu), and the same issue happens for 6 tests; see this run. For example, running

python -m pytest -v tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_generate_compilation_all_outputs

gives:

FAILED tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_generate_compilation_all_outputs
- torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder transformers.models.gemma3.configuration_gemma3.Gemma3TextConfig
from user code:
File "/usr/local/lib/python3.9/site-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 1320, in forward
causal_mask = self._update_causal_mask(
File "/usr/local/lib/python3.9/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 1115, in _update_causal_mask
if self.config.text_config._attn_implementation == "flash_attention_2":
File "/usr/local/lib/python3.9/site-packages/transformers/configuration_utils.py", line 211, in __getattribute__
return super().__getattribute__(key)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
FAILED tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_generate_compile_model_forward - torch._dynamo.exc.Unsupported: Unexpected type in sourceless builder transformers.models.gemma3.configuration_gemma3.Gemma3TextConfig
from user code:
File "/usr/local/lib/python3.9/site-packages/transformers/utils/generic.py", line 969, in wrapper
output = func(self, *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 1320, in forward
causal_mask = self._update_causal_mask(
File "/usr/local/lib/python3.9/site-packages/transformers/models/gemma3/modeling_gemma3.py", line 1115, in _update_causal_mask
if self.config.text_config._attn_implementation == "flash_attention_2":
File "/usr/local/lib/python3.9/site-packages/transformers/configuration_utils.py", line 211, in __getattribute__
return super().__getattribute__(key)
Set TORCHDYNAMO_VERBOSE=1 for the internal stack trace (please do this especially if you're reporting a bug to PyTorch). For even more developer context, set TORCH_LOGS="+dynamo"
===== 2 failed, 1267 passed, 406 skipped, 46 warnings in 143.60s (0:02:23) =====
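As the error message itself suggests, the internal dynamo stack trace for failures like these can be surfaced by re-running the failing test with the verbose flags set. A sketch using the test ID from the report above:

```shell
# TORCHDYNAMO_VERBOSE=1 prints the internal stack trace;
# TORCH_LOGS="+dynamo" adds full dynamo debug logging.
TORCHDYNAMO_VERBOSE=1 TORCH_LOGS="+dynamo" \
  python -m pytest -v \
  "tests/models/gemma3/test_modeling_gemma3.py::Gemma3Vision2TextModelTest::test_generate_compile_model_forward"
```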
@ydshieh @gante We are cherry-picking this PR (along with the other cudagraph bug PR) into 2.7.1 (#152774). I have tested that the above test passes with this PR built on top of 2.7 (and verified that it fails on plain 2.7). Once the PR is merged to the release branch, it would be really helpful if you could do some extra testing on your side.
Stack from ghstack (oldest at bottom):
Fixes #150994
We should cherry-pick to 2.7 branch if possible, because this breaks torch.compile on some HF models. Look at the issue referenced here.
cc @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @kadeng @chauhang @amjames