Do not cover up `__dunder__` method type-hints from `.pyi` file #150875
Conversation
torch/_tensor.py
```python
__pos__ = _C.TensorBase.positive
__neg__ = _C.TensorBase.neg
__abs__ = _C.TensorBase.abs

@_handle_torch_function_and_wrap_type_error_to_not_implemented
def __floordiv__(self, other):
    return torch.floor_divide(self, other)

# The typehints for these dunder methods are auto-generated as part of
# _C.TensorBase's typestubs, so use those.
if not TYPE_CHECKING:

    @_handle_torch_function_and_wrap_type_error_to_not_implemented
    def __rfloordiv__(self, other):
        return torch.floor_divide(other, self)

    @_handle_torch_function_and_wrap_type_error_to_not_implemented
```
I don't quite understand why some things need to be inside the TYPE_CHECKING
block and what things don't need to be (and I'm not entirely sure how the torch/_C/__init__.pyi
file is actually generated from the __init__.pyi.in
file), but this combination seems to make the tests pass.
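For what it's worth, the general shape of the guard can be demonstrated without torch at all. This is a hypothetical minimal sketch (all names are made up, not from the PR) of how an `if not TYPE_CHECKING:` block lets a stub's annotation win at type-checking time while the runtime definition still applies:

```python
from typing import TYPE_CHECKING

class Base:
    # Imagine this method's annotation comes from an auto-generated
    # .pyi stub, e.g.:  def double(self) -> "Base": ...
    def double(self):
        return self

class Derived(Base):
    if not TYPE_CHECKING:
        # At runtime TYPE_CHECKING is False, so this override is
        # installed. A static checker skips the block entirely and
        # falls back to Base's (stub) annotation instead of this
        # unannotated definition.
        def double(self):
            return Derived()

d = Derived()
print(type(d.double()).__name__)  # Derived
```

At runtime the override is visible as usual; only the static view changes.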
Hm... it looks like there are a bunch of mypy failures because more things are resolving to …. I'd be interested to know what the guidance here should be; should I insert …?
Hello, and thanks for doing this! It's a great idea, and will fix most of #145838. I tried to do this earlier in a different way, and it got merged, but it ran into issues downstream with projects like torchrec and executorch and got reverted, and I could not debug it locally. This might well work; it's less invasive.

Number one thing: it's hard to read the diffs, and this is a pretty sensitive file. Could I convince you to re-order the method definitions in _tensor.py so they are in the same order as before, so it's easy to compare the old and new versions of each method?

Secondarily, in answer to your question, I think that using …. The documentation on ….

Excited to see how this goes!
A lot of the non-type changes here are not ok. You should NOT change existing logic.
Going back to my original question: would you prefer that I just use …?
Well, I was going from a previous code review adding typing where I had the same issue (a scalar tensor being used as a …). I myself interpreted the "logic changes" to refer to the ….

It's... unfortunate that my previous attempt got reverted because it broke downstream products like executorch, but I wasn't given a traceback I could use, and I couldn't manage to get executorch to build with the commit ID (we had this issue before as well; apparently this month's executorch build is much easier to get working, though).
Oh, sorry @alanhdu, I didn't realize there was a lot of discussion here and just looked at the diff. Doing things like calling .item(), wrapping numbers into Tensors, etc. have very subtle implications that need very, very careful review.
The change sounds OK in terms of non-typing behavior now. Thanks for the update!
Given the number of skips, should we expect that many users doing type checking of their code will see errors after this?
In particular, a lot of the code where you added ignores actually works today. So it's the typing that is too restrictive, right?
```python
assert_type(BOOL / TENSOR, Any)
assert_type(FLOAT / TENSOR, Any)
assert_type(INT / TENSOR, Any)
assert_type(BOOL // TENSOR, Tensor)
```
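For readers unfamiliar with these tests: `assert_type` is a runtime no-op that simply returns its first argument; only static checkers (mypy, pyright) verify that the inferred type matches the second argument. A torch-free sketch:

```python
# assert_type does nothing at runtime; a static checker verifies that
# the inferred type of the first argument is exactly the second.
try:
    from typing import assert_type  # Python 3.11+
except ImportError:
    from typing_extensions import assert_type

x = 3 // 2               # a checker infers x as int
y = assert_type(x, int)  # passes type checking; returns x at runtime
print(y)  # 1
```

If the inferred type were `Any` instead of `int`, mypy would flag the `assert_type` call even though the program runs fine, which is exactly how these tests pin down the dunder-method annotations.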
I think the comment above can be removed now that these are fixed!
I think the comment still needs to be there, because `__rmod__`, `__rlshift__`, and `__rrshift__` still turn to `Any` (e.g. `INT % TENSOR`). I haven't figured out exactly why their behavior is different when I try to move the implementations into the `if not TYPE_CHECKING` block (if I move them, then they resolve to `int` instead of `Tensor` for some reason...).
Yeah, there will probably be some user type errors downstream. Of the …
**Help for users in the release notes?**

I think we should help out your average-practitioner end user who gets new type-checking errors in an existing code base that "already works" - by giving them some help in the release notes. A typical example:
Before this change, …. Consider a new type error in the user's system coming from this one line: somewhere else in the end user code, type checking finds that some variable …. Some possibilities:
Case 1 will likely work "every time" (as long as you are "sure" that the result is always a scalar tensor), and it's something we do sometimes in the pytorch codebase. Calling ….

Case 2 is a latent trap even if it works. They could use the Python Array API instead, or construct a new instance of the correct class, or their own ….

Case 3 is a catchall; there might not be much to say.

Case 4 is the very reason we have typing: to find wrong code.
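To make Case 1 concrete without depending on torch, here is a hypothetical stand-in (`FakeScalarTensor`, `rescale`, and friends are invented for illustration): a "tensor" whose division returns another tensor, as the stricter stubs now say, being passed to a function annotated to take a plain float:

```python
# Hypothetical torch-free stand-in for a 0-dim tensor.
class FakeScalarTensor:
    def __init__(self, value: float) -> None:
        self._value = value

    def __truediv__(self, other: float) -> "FakeScalarTensor":
        # Division returns a tensor, not a float, matching the
        # stricter annotations.
        return FakeScalarTensor(self._value / other)

    def item(self) -> float:
        # Case 1: explicitly extract the Python scalar; this both
        # satisfies the type checker and documents the intent.
        return self._value

def rescale(factor: float) -> float:
    return factor * 10.0

t = FakeScalarTensor(4.0)
half = t / 2.0                  # checker now sees FakeScalarTensor
print(rescale(half.item()))     # 20.0
```

Before the change, `half` would have been `Any` and `rescale(half)` would have type-checked silently; after it, the checker forces the user to decide whether the `.item()` extraction (Case 1) is actually safe.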
I think that's fair; agreed that some guidance in the release notes might make sense. Is that something I need to do in this PR? I checked the ….

I agree that case 1 and case 2 are the most likely (since they are places where at runtime things will generally work out). I agree that having explicit casts (e.g. …
To be honest, I actually have no idea how the release notes are prepared, but I know they aren't the responsibility of the person making the pull request! I figured I'd leave notes here in case they were useful to someone.
In the build system, we generate a `torch/_C/__init__.pyi` that contains typehints of the base `TensorBase` that `torch.Tensor` inherits from. That contains a bunch of type-annotations for these dunder methods. Unfortunately, by defining the methods here, those annotations are being automatically overwritten and "hidden", leading to a bunch of confusing type-errors like

```python
def inv(x: torch.Tensor):
    # Unsupported operand [58]: `/` is not supported for operand types `int` and `torch._tensor.Tensor`.
    1 / x
```

This modifies the code to use the *runtime* behavior of these functions but to fall back on the `.pyi` annotations at type-checking time.
I added a wrong tag, …. At least someone will see it!
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k @ezyang @malfet @xuzhao9 @gramster