MAINT: Remove any promotion-state switching logic #27397


Closed · wants to merge 1 commit
9 changes: 9 additions & 0 deletions doc/release/upcoming_changes/27156.change.rst
@@ -0,0 +1,9 @@
NEP 50 promotion state option removed
-------------------------------------
The NEP 50 promotion state settings have been removed. They were always
meant as a temporary means for testing.
A warning is now given if the environment variable is set to anything
other than ``NPY_PROMOTION_STATE=weak``, while ``_set_promotion_state``
and ``_get_promotion_state`` have been removed.
Code that relied on ``_no_nep50_warning`` can substitute
``contextlib.nullcontext`` where the function is not available.
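
For downstream code, a minimal compatibility sketch (assuming the helper was accessed as `np._no_nep50_warning`; the fallback pattern below is illustrative, not an officially documented API):

```python
import contextlib
import numpy as np

# Fall back to a no-op context manager once the private helper is removed
# (as in this PR); older NumPy versions still expose it.
_no_nep50_warning = getattr(np, "_no_nep50_warning", contextlib.nullcontext)

with _no_nep50_warning():
    total = np.float32(1.0) + 1.0  # behaves the same with or without the shim
```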
38 changes: 12 additions & 26 deletions doc/source/reference/c-api/array.rst
@@ -1089,14 +1089,13 @@ Converting data types
returned when the value will not overflow or be truncated to
an integer when converting to a smaller type.

This is almost the same as the result of
PyArray_CanCastTypeTo(PyArray_MinScalarType(arr), totype, casting),
but it also handles a special case arising because the set
of uint values is not a subset of the int values for types with the
same number of bits.

.. c:function:: PyArray_Descr* PyArray_MinScalarType(PyArrayObject* arr)

.. note::
With the adoption of NEP 50 in NumPy 2, this function is not used
internally. It is currently provided for backwards compatibility,
but is expected to be deprecated eventually.

.. versionadded:: 1.6

If *arr* is an array, returns its data type descriptor, but if
@@ -1134,8 +1133,7 @@ Converting data types

.. c:function:: int PyArray_ObjectType(PyObject* op, int mintype)

This function is superseded by :c:func:`PyArray_MinScalarType` and/or
:c:func:`PyArray_ResultType`.
This function is superseded by :c:func:`PyArray_ResultType`.

This function is useful for determining a common type that two or
more arrays can be converted to. It only works for non-flexible
@@ -3250,30 +3248,18 @@ Array scalars
.. c:function:: NPY_SCALARKIND PyArray_ScalarKind( \
int typenum, PyArrayObject** arr)

See the function :c:func:`PyArray_MinScalarType` for an alternative
mechanism introduced in NumPy 1.6.0.
Legacy way to query special promotion for scalar values. This is not
used in NumPy itself anymore and is expected to be deprecated eventually.

Return the kind of scalar represented by *typenum* and the array
in *\*arr* (if *arr* is not ``NULL`` ). The array is assumed to be
rank-0 and only used if *typenum* represents a signed integer. If
*arr* is not ``NULL`` and the first element is negative then
:c:data:`NPY_INTNEG_SCALAR` is returned, otherwise
:c:data:`NPY_INTPOS_SCALAR` is returned. The possible return values
are the enumerated values in :c:type:`NPY_SCALARKIND`.
New DTypes can define promotion rules specific to Python scalars.

.. c:function:: int PyArray_CanCoerceScalar( \
char thistype, char neededtype, NPY_SCALARKIND scalar)

See the function :c:func:`PyArray_ResultType` for details of
NumPy type promotion, updated in NumPy 1.6.0.
Legacy way to query special promotion for scalar values. This is not
used in NumPy itself anymore and is expected to be deprecated eventually.

Implements the rules for scalar coercion. Scalars are only
silently coerced from thistype to neededtype if this function
returns nonzero. If scalar is :c:data:`NPY_NOSCALAR`, then this
function is equivalent to :c:func:`PyArray_CanCastSafely`. The rule is
that scalars of the same KIND can be coerced into arrays of the
same KIND. This rule means that high-precision scalars will never
cause low-precision arrays of the same KIND to be upcast.
Use ``PyArray_ResultType`` for similar purposes.


Data-type descriptors
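
As a hedged, Python-level illustration of the promotion rules that `PyArray_ResultType` now embodies (the C calls themselves are not shown), `np.result_type` treats Python scalars as "weak" while NumPy scalars and dtypes stay authoritative:

```python
import numpy as np

# A Python int does not widen the result dtype under NEP 50...
print(np.result_type(np.uint8, 1))            # uint8
# ...but a typed NumPy scalar does.
print(np.result_type(np.uint8, np.int64(1)))  # int64
# Python floats behave analogously against inexact dtypes.
print(np.result_type(np.float32, 1.0))        # float32
```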
11 changes: 7 additions & 4 deletions numpy/__init__.py
@@ -120,8 +120,8 @@

from . import _core
from ._core import (
False_, ScalarType, True_, _get_promotion_state, _no_nep50_warning,
_set_promotion_state, abs, absolute, acos, acosh, add, all, allclose,
False_, ScalarType, True_,
abs, absolute, acos, acosh, add, all, allclose,
amax, amin, any, arange, arccos, arccosh, arcsin, arcsinh,
arctan, arctan2, arctanh, argmax, argmin, argpartition, argsort,
argwhere, around, array, array2string, array_equal, array_equiv,
@@ -529,8 +529,11 @@ def hugepage_setup():
_core.multiarray._multiarray_umath._reload_guard()

# TODO: Remove the environment variable entirely now that it is "weak"
_core._set_promotion_state(
os.environ.get("NPY_PROMOTION_STATE", "weak"))
if (os.environ.get("NPY_PROMOTION_STATE", "weak") != "weak"):
warnings.warn(
"NPY_PROMOTION_STATE was a temporary feature for NumPy 2.0 "
"transition and is ignored after NumPy 2.2.",
UserWarning, stacklevel=2)

# Tell PyInstaller where to find hook-numpy.py
def _pyinstaller_hooks_dir():
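
A sketch of what this import-time check does in practice (illustrative only; the warning fires once, on the first import of NumPy in a fresh process):

```python
import os
import warnings

os.environ["NPY_PROMOTION_STATE"] = "legacy"  # any value other than "weak"

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    import numpy as np  # runs the check added above

print([str(w.message) for w in caught])
# ['NPY_PROMOTION_STATE was a temporary feature for NumPy 2.0 '
#  'transition and is ignored after NumPy 2.2.']
```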
5 changes: 0 additions & 5 deletions numpy/__init__.pyi
@@ -3533,11 +3533,6 @@ class errstate:
) -> None: ...
def __call__(self, func: _CallType) -> _CallType: ...

@contextmanager
def _no_nep50_warning() -> Generator[None, None, None]: ...
def _get_promotion_state() -> str: ...
def _set_promotion_state(state: str, /) -> None: ...

_ScalarType_co = TypeVar("_ScalarType_co", bound=generic, covariant=True)

class ndenumerate(Generic[_ScalarType_co]):
16 changes: 6 additions & 10 deletions numpy/_core/_methods.py
@@ -14,7 +14,6 @@
from numpy._core.multiarray import asanyarray
from numpy._core import numerictypes as nt
from numpy._core import _exceptions
from numpy._core._ufunc_config import _no_nep50_warning
from numpy._globals import _NoValue

# save those O(100) nanoseconds!
@@ -135,9 +134,8 @@ def _mean(a, axis=None, dtype=None, out=None, keepdims=False, *, where=True):

ret = umr_sum(arr, axis, dtype, out, keepdims, where=where)
if isinstance(ret, mu.ndarray):
with _no_nep50_warning():
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
if is_float16_result and out is None:
ret = arr.dtype.type(ret)
elif hasattr(ret, 'dtype'):
@@ -180,9 +178,8 @@ def _var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *,
# matching rcount to arrmean when where is specified as array
div = rcount.reshape(arrmean.shape)
if isinstance(arrmean, mu.ndarray):
with _no_nep50_warning():
arrmean = um.true_divide(arrmean, div, out=arrmean,
casting='unsafe', subok=False)
arrmean = um.true_divide(arrmean, div, out=arrmean,
casting='unsafe', subok=False)
elif hasattr(arrmean, "dtype"):
arrmean = arrmean.dtype.type(arrmean / rcount)
else:
@@ -212,9 +209,8 @@ def _var(a, axis=None, dtype=None, out=None, ddof=0, keepdims=False, *,

# divide by degrees of freedom
if isinstance(ret, mu.ndarray):
with _no_nep50_warning():
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
ret = um.true_divide(
ret, rcount, out=ret, casting='unsafe', subok=False)
elif hasattr(ret, 'dtype'):
ret = ret.dtype.type(ret / rcount)
else:
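
For context, a stand-alone sketch of the in-place division pattern these hunks keep (the array and names are illustrative, not NumPy's internal variables):

```python
import numpy as np

arr = np.arange(12, dtype=np.float64).reshape(3, 4)

# Same shape as _mean: reduce first, then divide in place, reusing the
# accumulator buffer via out= and casting='unsafe'.
ret = np.add.reduce(arr, axis=0)   # column sums
rcount = arr.shape[0]              # number of elements reduced
np.true_divide(ret, rcount, out=ret, casting="unsafe")
print(ret)  # [4. 5. 6. 7.]
```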
21 changes: 1 addition & 20 deletions numpy/_core/_ufunc_config.py
@@ -14,7 +14,7 @@

__all__ = [
"seterr", "geterr", "setbufsize", "getbufsize", "seterrcall", "geterrcall",
"errstate", '_no_nep50_warning'
"errstate"
]


@@ -482,22 +482,3 @@ def inner(*args, **kwargs):
_extobj_contextvar.reset(_token)

return inner


NO_NEP50_WARNING = contextvars.ContextVar("_no_nep50_warning", default=False)

@set_module('numpy')
@contextlib.contextmanager
def _no_nep50_warning():
"""
Context manager to disable NEP 50 warnings. This context manager is
only relevant if the NEP 50 warnings are enabled globally (which is not
thread/context safe).

This warning context manager itself is fully safe, however.
"""
token = NO_NEP50_WARNING.set(True)
try:
yield
finally:
NO_NEP50_WARNING.reset(token)
6 changes: 1 addition & 5 deletions numpy/_core/multiarray.py
@@ -17,7 +17,6 @@
_flagdict, from_dlpack, _place, _reconstruct,
_vec_string, _ARRAY_API, _monotonicity, _get_ndarray_c_version,
_get_madvise_hugepage, _set_madvise_hugepage,
_get_promotion_state, _set_promotion_state
)

__all__ = [
@@ -40,8 +39,7 @@
'normalize_axis_index', 'packbits', 'promote_types', 'putmask',
'ravel_multi_index', 'result_type', 'scalar', 'set_datetimeparse_function',
'set_typeDict', 'shares_memory', 'typeinfo',
'unpackbits', 'unravel_index', 'vdot', 'where', 'zeros',
'_get_promotion_state', '_set_promotion_state']
'unpackbits', 'unravel_index', 'vdot', 'where', 'zeros']

# For backward compatibility, make sure pickle imports
# these functions from here
@@ -67,8 +65,6 @@
nested_iters.__module__ = 'numpy'
promote_types.__module__ = 'numpy'
zeros.__module__ = 'numpy'
_get_promotion_state.__module__ = 'numpy'
_set_promotion_state.__module__ = 'numpy'
normalize_axis_index.__module__ = 'numpy.lib.array_utils'


9 changes: 4 additions & 5 deletions numpy/_core/numeric.py
@@ -17,8 +17,7 @@
empty, empty_like, flatiter, frombuffer, from_dlpack, fromfile, fromiter,
fromstring, inner, lexsort, matmul, may_share_memory, min_scalar_type,
ndarray, nditer, nested_iters, promote_types, putmask, result_type,
shares_memory, vdot, where, zeros, normalize_axis_index,
_get_promotion_state, _set_promotion_state, vecdot
shares_memory, vdot, where, zeros, normalize_axis_index, vecdot
)

from . import overrides
@@ -28,7 +27,7 @@
from .umath import (multiply, invert, sin, PINF, NAN)
from . import numerictypes
from ..exceptions import AxisError
from ._ufunc_config import errstate, _no_nep50_warning
from ._ufunc_config import errstate

bitwise_not = invert
ufunc = type(sin)
@@ -53,7 +52,7 @@
'identity', 'allclose', 'putmask',
'flatnonzero', 'inf', 'nan', 'False_', 'True_', 'bitwise_not',
'full', 'full_like', 'matmul', 'vecdot', 'shares_memory',
'may_share_memory', '_get_promotion_state', '_set_promotion_state']
'may_share_memory']


def _zeros_like_dispatcher(
@@ -2457,7 +2456,7 @@ def isclose(a, b, rtol=1.e-5, atol=1.e-8, equal_nan=False):
elif isinstance(y, int):
y = float(y)

with errstate(invalid='ignore'), _no_nep50_warning():
with errstate(invalid='ignore'):
result = (less_equal(abs(x-y), atol + rtol * abs(y))
& isfinite(y)
| (x == y))
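
A quick check (hedged, run against a build without the context manager) that the public behaviour of `isclose` is unchanged by dropping the internal `_no_nep50_warning` wrapper:

```python
import numpy as np

a = np.array([1.0, 1.1], dtype=np.float32)
# Comparing against a plain Python float works as before; the boolean
# result does not depend on the removed warning-suppression context.
print(np.isclose(a, 1.0))  # [ True False]
```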
39 changes: 4 additions & 35 deletions numpy/_core/src/multiarray/arraytypes.c.src
@@ -275,41 +275,10 @@ static int
#endif
) {
PyArray_Descr *descr = PyArray_DescrFromType(NPY_@TYPE@);
int promotion_state = get_npy_promotion_state();
if (promotion_state == NPY_USE_LEGACY_PROMOTION || (
promotion_state == NPY_USE_WEAK_PROMOTION_AND_WARN
&& !npy_give_promotion_warnings())) {
/*
* This path will be taken both for the "promotion" case such as
* `uint8_arr + 123` as well as the assignment case.
* The "legacy" path should only ever be taken for assignment
* (legacy promotion will prevent overflows by promoting up)
* so a normal deprecation makes sense.
* When weak promotion is active, we use "future" behavior unless
* warnings were explicitly opt-in.
*/
if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1,
"NumPy will stop allowing conversion of out-of-bound "
"Python integers to integer arrays. The conversion "
"of %.100R to %S will fail in the future.\n"
"For the old behavior, usually:\n"
" np.array(value).astype(dtype)\n"
"will give the desired result (the cast overflows).",
obj, descr) < 0) {
Py_DECREF(descr);
return -1;
}
Py_DECREF(descr);
return 0;
}
else {
/* Live in the future, outright error: */
PyErr_Format(PyExc_OverflowError,
"Python integer %R out of bounds for %S", obj, descr);
Py_DECREF(descr);
return -1;
}
assert(0);
PyErr_Format(PyExc_OverflowError,
"Python integer %R out of bounds for %S", obj, descr);
Py_DECREF(descr);
return -1;
}
return 0;
}
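
A Python-level illustration of the behaviour this simplified C path enforces (assuming a NumPy 2.x build with only weak promotion, as on this branch): out-of-bound Python integers now raise immediately instead of emitting a DeprecationWarning.

```python
import numpy as np

try:
    np.array(300, dtype=np.uint8)  # 300 cannot be represented as uint8
except OverflowError as exc:
    print(exc)  # Python integer 300 out of bounds for uint8
```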