[dynamic shapes] guard_or_false for infer_size #152146


Closed
pianpwk wants to merge 9 commits

Conversation

@pianpwk (Contributor) commented Apr 24, 2025

pytorch-bot bot commented Apr 24, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/152146

Note: Links to docs will display an error until the docs builds have been completed.

✅ You can merge normally! (2 Unrelated Failures)

As of commit b44c787 with merge base 0e2b948:

BROKEN TRUNK - The following jobs failed but were also present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

UNSTABLE - The following job is marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pianpwk changed the title from "[WIP][dynamic shapes] guard_or_false for infer_size" to "[dynamic shapes] guard_or_false for infer_size" on Apr 26, 2025
@pianpwk marked this pull request as ready for review on April 26, 2025 07:39
@@ -393,6 +394,7 @@ def f(x):

self.assertEqual(counter.frame_count, 2) # not three or four!

@expectedFailure # TODO(laithsakka, pianpwk): handle guard_or_false before oblivious hint fallback
A reviewer (Contributor) commented on the expectedFailure line:

Why does this fail now?

guard_size_oblivious(sizeA == 1)
or guard_size_oblivious(sizeB == 1)
or sizeA == sizeB,
guard_or_false(sizeA == 1) or guard_or_false(sizeB == 1) or sizeA == sizeB,
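
For context on the semantic change, the sketch below shows the intent of the rewritten condition; it assumes guard_or_false from torch.fx.experimental.symbolic_shapes and is an illustration, not the PR's actual implementation:

# Illustration only, assuming guard_or_false from
# torch.fx.experimental.symbolic_shapes.
from torch.fx.experimental.symbolic_shapes import guard_or_false

def dim_is_broadcastable(sizeA, sizeB) -> bool:
    # guard_or_false(expr) evaluates expr when it can; when the result cannot
    # be determined (e.g. unbacked sizes), it returns False instead of raising
    # a data-dependent error, so control falls through to the plain
    # sizeA == sizeB comparison, which may then guard.
    return (
        guard_or_false(sizeA == 1)
        or guard_or_false(sizeB == 1)
        or sizeA == sizeB
    )

Compared with guard_size_oblivious, which reasons about unbacked size-like symbols as if they were >= 2 and can still raise for undecidable expressions, the new helper never errors here; undecidable cases simply fail the fast path.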
A reviewer (Contributor) commented on this change:

You can land it as is; in the future I might revisit this to make the code more understandable.

Basically, first we want to check whether there is broadcasting using guard_or_none. If either of them does not return None, we are done. If both return None, I would want to add an explicit extra message to the torch._check that says we assumed this path because both sizeA and sizeB are unbacked.

No action required from you at this moment; I will file an issue for this.
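
A rough sketch of the structure described above, not the PR's implementation: it assumes guard_or_false and guard_or_true from torch.fx.experimental.symbolic_shapes and builds the guard_or_none-style helper the reviewer mentions on top of them; the helper name and the error messages are illustrative.

import torch
from torch.fx.experimental.symbolic_shapes import guard_or_false, guard_or_true

def guard_or_none(expr):
    # Stand-in for the helper the reviewer describes: True/False when expr is
    # statically decidable either way, None when it cannot be decided.
    if guard_or_false(expr):
        return True
    if not guard_or_true(expr):
        return False
    return None

def check_broadcast_dim(sizeA, sizeB):
    a_is_one = guard_or_none(sizeA == 1)
    b_is_one = guard_or_none(sizeB == 1)
    if a_is_one or b_is_one:
        # One side is known to be 1, so this dimension broadcasts.
        return
    if a_is_one is None and b_is_one is None:
        # Both sides are unbacked and undecidable: assume the sizes match, and
        # say so explicitly in the error message, as suggested above.
        torch._check(
            sizeA == sizeB,
            lambda: f"assumed {sizeA} == {sizeB} because both sizes are unbacked",
        )
        return
    # Otherwise neither side is (provably) 1, so the sizes must match.
    torch._check(
        sizeA == sizeB,
        lambda: f"expected {sizeA} and {sizeB} to match for broadcasting",
    )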

@pianpwk (Contributor Author) commented May 6, 2025

@pytorchbot rebase

@pytorchmergebot (Collaborator) commented:

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot (Collaborator) commented:

Rebase failed due to Command git -C /home/runner/work/pytorch/pytorch rebase refs/remotes/origin/viable/strict pull/152146/head returned non-zero exit code 1

Rebasing (1/56)
Auto-merging test/export/test_export.py
CONFLICT (content): Merge conflict in test/export/test_export.py
Auto-merging torch/_subclasses/fake_impls.py
error: could not apply 7d6c7bbda53... infer_size
hint: Resolve all conflicts manually, mark them as resolved with
hint: "git add/rm <conflicted_files>", then run "git rebase --continue".
hint: You can instead skip this commit: run "git rebase --skip".
hint: To abort and get back to the state before "git rebase", run "git rebase --abort".
hint: Disable this message with "git config set advice.mergeConflict false"
Could not apply 7d6c7bbda53... infer_size

Raised by https://github.com/pytorch/pytorch/actions/runs/14851267873

@@ -2,7 +2,7 @@ add_loop_eager,compile_time_instruction_count,2960000000,0.015



add_loop_eager_dynamic,compile_time_instruction_count,5633000000,0.025
add_loop_eager_dynamic,compile_time_instruction_count,5806000000,0.025
The author (pianpwk) commented on this change:

Due to the additional overhead of guard_or_false, profiling shows infer_size going from 14% to 18% of total compile time.
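
For reference, the bumped expectation amounts to roughly a 3% regression on this benchmark: (5806000000 − 5633000000) / 5633000000 ≈ 0.031, i.e. about a 3.1% increase in compile-time instruction count, which is above the 0.025 value in the last column (presumably the allowed relative noise), hence the expectation update.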

@pianpwk (Contributor Author) commented May 8, 2025

@pytorchmergebot merge

@pytorch-bot added the ciflow/trunk (Trigger trunk jobs on your pull request) label on May 8, 2025
@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here
