Expand Tag Set: views & reductions #129020

Open
eellison opened this issue Jun 19, 2024 · 9 comments
Labels
good first issue module: internals Related to internal abstractions in c10 and ATen triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module

Comments

@eellison
Contributor

eellison commented Jun 19, 2024

🚀 The feature, motivation and pitch

The pointwise tag has found use in multiple places. It would also be useful to have reduction and view tags; the latter in particular is often re-created ad hoc.

cc @ezyang @bhosmer @smessmer @ljk53 @bdhirsh @SherlockNoMad

Alternatives

No response

Additional context

No response

@Robertwahl

I'm interested in working on this feature

@albanD albanD added module: internals Related to internal abstractions in c10 and ATen triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module labels Jun 19, 2024
@eellison
Contributor Author

@Robertwahl, this PR is a good template: #90029, with the reductions specified in ReduceOps.cpp and views in TensorShape.cpp.

@JitheshPavan

@eellison, I would like to work on this issue. It will be my first, so I would be grateful for some guidance on how to approach the problem.

@JoyChopra1298

@eellison is this issue still available?

@eellison
Contributor Author

Yes, still available.

@AlonSardas

AlonSardas commented Apr 29, 2025

It seems that nobody is working on this, so I would like to pick it up.

From what I understand, we would like to do the following:

  1. Add view and reduction tags to aten/src/ATen/native/tags.yaml
  2. Go through all the declarations in aten/src/ATen/native/native_functions.yaml and add the relevant tags as follows:
    (a) reduction tag should be added to all operators that accept a reduction argument
    (b) view tag should be added to the operators where none of the arguments are writable. I assume this tag should be consistent with the OpOverload.is_view property.
  3. We could add tests to verify this coverage, by iterating over all operators in torch.ops.aten.

Do you agree with that? Is there anything else we should do?
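
A minimal sketch of what steps 1 and 2 could look like as yaml edits (the tag descriptions and the specific func entries shown here are illustrative assumptions, not the change that was eventually merged):

```yaml
# aten/src/ATen/native/tags.yaml (hypothetical additions)
- tag: reduction
  desc: |
    This tag indicates that an operator aggregates elements along one or
    more dimensions, producing an output of reduced dimensionality.
- tag: view
  desc: |
    This tag indicates that an operator returns a view of its input,
    i.e. the output aliases the input's storage.

# aten/src/ATen/native/native_functions.yaml (hypothetical edits:
# add the new tag to each matching declaration)
- func: sum(Tensor self, *, ScalarType? dtype=None) -> Tensor
  tags: reduction

- func: view(Tensor(a) self, SymInt[] size) -> Tensor(a)
  tags: view
```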

@eellison
Contributor Author

@AlonSardas that sounds close!

(a) reduction tag should be added to all operators that accept a reduction argument

Let's just add the base reductions, not kernels that are parameterized by the type of reduction. See my comment above about the ops in ReduceOps.cpp.
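
The distinction can be illustrated with a small Python sketch (the op groupings below are assumptions drawn from this discussion, not the final tagged set):

```python
# Hypothetical illustration of the "base reductions only" rule.
# The op lists are assumptions for the sketch, not the set of ops
# that was actually tagged.

# Ops whose semantics are inherently a reduction (from ReduceOps.cpp).
BASE_REDUCTIONS = {"sum", "prod", "mean", "argmax", "argmin", "norm"}

# Ops parameterized by a reduce= argument; the reduction is a
# configuration detail, so they would be left untagged.
PARAMETERIZED = {"scatter_reduce", "segment_reduce"}

# Scans: cumulative ops keep the input's dimensionality, so they are
# excluded as well.
SCANS = {"cumsum", "cumprod"}

def should_tag_reduction(op_name: str) -> bool:
    """Return True iff op_name would receive the hypothetical reduction tag."""
    return op_name in BASE_REDUCTIONS

for op in sorted(BASE_REDUCTIONS | PARAMETERIZED | SCANS):
    print(f"{op}: tag={'reduction' if should_tag_reduction(op) else 'none'}")
```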

@AlonSardas

I see, that makes more sense. I still think there are a few finer points worth considering when categorizing operators:

  • On reduction: An intuitive definition might be: reduction operations aggregate tensor elements along one or more dimensions, producing an output with fewer dimensions.
    • Many operators in ReduceOps.cpp fit this (e.g. sum, prod, argmax, mean), but cumsum and cumprod don't reduce dimensionality. My assumption was to exclude them. Does that sound reasonable?
    • min and max match the definition but are implemented in TensorCompare.cpp. Should they be tagged as well?
  • On view: The list in https://pytorch.org/docs/stable/tensor_view.html seems like a good reference
    • And yet, some operators like reshape, reshape_as and flatten (from TensorShape.cpp) may clone the input and likely shouldn't be tagged as views, agreed?
    • Also, some ops have OpOverload.is_view == True but don't behave like views (e.g. resolve_neg, resolve_conj, pin_memory). I assume we should exclude those?

If you agree with the points above, I'll go ahead and open a pull request.
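
Step 3 of the plan above (iterate the registered ops and cross-check the view tag against OpOverload.is_view, with an explicit exclusion list) could be sketched roughly as below. A real version would walk torch.ops.aten; to keep the sketch self-contained, a tiny hand-written op table stands in for the registry, and the exclusion set mirrors the ops discussed above:

```python
# Sketch of a tag-coverage check in the spirit of step 3. FakeOverload
# stands in for OpOverload; its is_view/tags fields mirror the real
# attributes discussed above.
from dataclasses import dataclass, field

@dataclass
class FakeOverload:
    name: str
    is_view: bool                       # mirrors OpOverload.is_view
    tags: set = field(default_factory=set)

# Ops where is_view is True but which don't behave like views, per the
# discussion above; they are excluded from the consistency check.
EXCLUDED = {"resolve_neg", "resolve_conj", "pin_memory"}

REGISTRY = [
    FakeOverload("transpose", is_view=True, tags={"view"}),
    FakeOverload("slice", is_view=True, tags={"view"}),
    FakeOverload("resolve_neg", is_view=True),   # excluded from the check
    FakeOverload("add", is_view=False, tags={"pointwise"}),
]

def missing_view_tags(registry):
    """Ops that report is_view but lack the view tag (minus exclusions)."""
    return [
        op.name
        for op in registry
        if op.is_view and op.name not in EXCLUDED and "view" not in op.tags
    ]

print(missing_view_tags(REGISTRY))  # expect an empty list for this table
```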

@eellison
Contributor Author

eellison commented May 6, 2025

My assumption was to exclude them. Does that sound reasonable?

Yes, cumsum and cumprod are considered scans.

And yet, some operators like reshape, reshape_as and flatten

Yeah, we are only going to target ops with definitive aliasing relationships that are CompositeExplicitAutograd, as opposed to CompositeImplicitAutograd. See: https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/README.md
