
TYP: numpy._NoValue #29170

Merged
merged 1 commit into numpy:main on Jun 11, 2025

Conversation

@jorenham (Member) commented Jun 11, 2025

This adds implicit re-exports for numpy._globals._NoValue and _CopyMode to __init__.pyi. They're available at runtime, so the stubs should reflect that (even though it's not public API), IMO.

This gets rid of a bunch of red squigglies in the internal Python codebase, e.g. in lib/_function_base_impl.py there are references to _NoValue.
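For context, a minimal sketch of what a stub-level re-export of these names can look like; the actual changes to numpy/__init__.pyi may use different names or forms, so treat this as illustrative only:

    # Illustrative only: one way a .pyi stub can expose runtime-available
    # private names; the real numpy/__init__.pyi diff may differ.
    from typing import Final

    from numpy._globals import _CopyMode as _CopyMode
    from numpy._globals import _NoValueType

    _NoValue: Final[_NoValueType]  # the "<no value>" sentinel instance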


Diff from mypy_primer, showing the effect of this PR on type check results on a corpus of open source code:

spark (https://github.com/apache/spark)
+ python/pyspark/pandas/series.py:1170: error: Unused "type: ignore" comment  [unused-ignore]
+ python/pyspark/pandas/series.py:1170: error: No overload variant of "__getitem__" of "Series" matches argument type "_NoValueType"  [call-overload]
+ python/pyspark/pandas/series.py:1170: note: Error code "call-overload" not covered by "type: ignore" comment
+ python/pyspark/pandas/series.py:1170: note: Possible overload variants:
+ python/pyspark/pandas/series.py:1170: note:     def __getitem__(self, list[str] | Index[Any] | Series[float] | slice[Any, Any, Any] | Series[builtins.bool] | ndarray[tuple[Any, ...], dtype[numpy.bool[builtins.bool]]] | list[builtins.bool] | tuple[Hashable | slice[Any, Any, Any], ...], /) -> Series[float]
+ python/pyspark/pandas/series.py:1170: note:     def __getitem__(self, str | bytes | date | datetime | timedelta | <7 more items> | complex | integer[Any] | floating[Any] | complexfloating[Any, Any], /) -> float
+ python/pyspark/pandas/series.py:1172: error: Unused "type: ignore" comment  [unused-ignore]

@jorenham (Member, Author) commented Jun 11, 2025

Here's the spark code that caused the primer diff (lines 1170-1172):

                tmp_val = arg[np._NoValue]  # type: ignore[attr-defined]
                # Remove in case it's set in defaultdict.
                del arg[np._NoValue]  # type: ignore[attr-defined]

https://github.com/apache/spark/blob/c999d2e3185b0d86bb3fdf723e6ab93864913f85/python/pyspark/pandas/series.py#L1170-L1172

The new error (and the two resolved ones) makes sense to me, since a pd.Series should not be indexed with np._NoValue, so it's nothing to worry about.
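As background for why that indexing use is odd: _NoValue is numpy's internal "no argument was passed" sentinel, which is why it shows up in places like lib/_function_base_impl.py. A simplified, hypothetical illustration of that pattern (reduce_like is made up for this sketch, not numpy API):

    import numpy as np

    # Sentinel-default pattern: distinguish "argument omitted" from any real value.
    def reduce_like(arr, axis=np._NoValue):
        if axis is np._NoValue:
            # Caller did not pass axis explicitly; fall back to a full reduction.
            axis = None
        return np.sum(arr, axis=axis)

    print(reduce_like(np.arange(6).reshape(2, 3)))          # 15
    print(reduce_like(np.arange(6).reshape(2, 3), axis=0))  # [3 5 7]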

@charris merged commit 1512865 into numpy:main on Jun 11, 2025
77 checks passed
@charris (Member) commented Jun 11, 2025

Thanks Joren.
