Building blocks for precise & flexible type hints
Optype is available as optype on PyPI:

```shell
pip install optype
```

For optional NumPy support, use the optype[numpy] extra.
This ensures that the installed numpy and the required numpy-typing-compat
versions are compatible with each other.

```shell
pip install "optype[numpy]"
```

See the optype.numpy docs for more info.

Optype can also be installed with conda from the conda-forge channel:

```shell
conda install conda-forge::optype
```

If you want to use optype.numpy, you should instead install optype-numpy:

```shell
conda install conda-forge::optype-numpy
```

Let's say you're writing a twice(x) function that evaluates 2 * x.
Implementing it is trivial, but what about the type annotations?
Because twice(2) == 4, twice(3.14) == 6.28 and twice('I') == 'II', it
might seem like a good idea to type it as twice[T](x: T) -> T: ....
However, that wouldn't include cases such as twice(True) == 2 or
twice((42, True)) == (42, True, 42, True), where the input- and output types
differ.
Moreover, twice should accept any type with a custom __rmul__ method
that accepts 2 as argument.
This is where optype comes in handy, which has single-method protocols for
all the builtin special methods.
For twice, we can use optype.CanRMul[T, R], which, as the name suggests,
is a protocol with (only) the def __rmul__(self, lhs: T) -> R: ... method.
With this, the twice function can be written as:

Python 3.11:

```python
from typing import Literal, TypeAlias, TypeVar

from optype import CanRMul

R = TypeVar("R")
Two: TypeAlias = Literal[2]
RMul2: TypeAlias = CanRMul[Two, R]


def twice(x: RMul2[R]) -> R:
    return 2 * x
```

Python 3.12+:

```python
from typing import Literal

from optype import CanRMul

type Two = Literal[2]
type RMul2[R] = CanRMul[Two, R]


def twice[R](x: RMul2[R]) -> R:
    return 2 * x
```
But what about types that implement __add__ but not __radd__?
In this case, we could return x * 2 as fallback (assuming commutativity).
Because the optype.Can* protocols are runtime-checkable, the revised
twice2 function can be compactly written as:
Python 3.11:

```python
from optype import CanMul

Mul2: TypeAlias = CanMul[Two, R]
CMul2: TypeAlias = Mul2[R] | RMul2[R]


def twice2(x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```

Python 3.12+:

```python
from optype import CanMul

type Mul2[R] = CanMul[Two, R]
type CMul2[R] = Mul2[R] | RMul2[R]


def twice2[R](x: CMul2[R]) -> R:
    if isinstance(x, CanRMul):
        return 2 * x
    else:
        return x * 2
```
See examples/twice.py for the full example.
The API of optype is flat; a single import optype as opt is all you need
(except for optype.numpy).
- optype
- optype.copy
- optype.dataclasses
- optype.inspect
- optype.io
- optype.json
- optype.pickle
- optype.string
- optype.typing
- optype.dlpack
- optype.numpy
There are five flavors of things that live within optype:

- optype.Just[T] and its optype.Just{Int,Float,Complex} subtypes only accept instances of the type itself, while rejecting instances of strict subtypes. This can be used to e.g. work around the float and complex type promotions, annotate object() sentinels with Just[object], or reject bool in functions that accept int.
- optype.Can{} types describe what can be done with it. For instance, any CanAbs[T] type can be used as argument to the abs() builtin function with return type T. Most Can{} implement a single special method, whose name directly matches that of the type. CanAbs implements __abs__, CanAdd implements __add__, etc.
- optype.Has{} is the analogue of Can{}, but for special attributes. HasName has a __name__ attribute, HasDict has a __dict__, etc.
- optype.Does{} describe the type of operators. So DoesAbs is the type of the abs({}) builtin function, and DoesPos the type of the +{} prefix operator.
- optype.do_{} are the correctly-typed implementations of Does{}. For each do_{} there is a Does{}, and vice versa. So do_abs: DoesAbs is the typed alias of abs({}), and do_pos: DoesPos is a typed version of operator.pos. The optype.do_ operators are more complete than operator's, have runtime-accessible type annotations, and have names you don't need to know by heart.
The reference docs are structured as follows:
All typing protocols here live in the root optype namespace.
They are runtime-checkable so that you can do e.g.
isinstance('snail', optype.CanAdd), in case you want to check whether
snail implements __add__.
Unlike collections.abc, optype's protocols aren't abstract base classes,
i.e. they don't extend abc.ABC, only typing.Protocol.
This allows the optype protocols to be used as building blocks for .pyi
type stubs.
Just is an invariant type "wrapper", where Just[T] only accepts instances of T,
and rejects instances of any strict subtypes of T.
Note that e.g. Literal[""] and LiteralString are not strict str subtypes,
and are therefore assignable to Just[str], but instances of class S(str): ...
are not assignable to Just[str].
Disallow passing bool as int:
```python
import optype as op


def assert_int(x: op.Just[int]) -> int:
    assert type(x) is int
    return x


assert_int(42)  # ok
assert_int(False)  # rejected
```

Annotating a sentinel:
```python
import optype as op

_DEFAULT = object()


def intmap(
    value: int,
    # same as `dict[int, int] | op.Just[object]`
    mapping: dict[int, int] | op.JustObject = _DEFAULT,
    /,
) -> int:
    # same as `type(mapping) is object`
    if isinstance(mapping, op.JustObject):
        return value
    return mapping[value]


intmap(1)  # ok
intmap(1, {1: 42})  # ok
intmap(1, "some object")  # rejected
```

Tip
The Just{Bytes,Int,Float,Complex,Date,Object} protocols are runtime-checkable,
so that isinstance(42, JustInt) is True and isinstance(bool(), JustInt) is False.
It's implemented through metaclasses, and type-checkers have no problem with it.
| optype type | accepts instances of |
|---|---|
| Just[T] | T |
| JustInt | builtins.int |
| JustFloat | builtins.float |
| JustComplex | builtins.complex |
| JustBytes | builtins.bytes |
| JustObject | builtins.object |
| JustDate | datetime.date |
The return type of these special methods is invariant. Python will raise an
error if some other (sub)type is returned.
This is why these optype interfaces don't accept generic type arguments.
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| complex(_) | do_complex | DoesComplex | __complex__ | CanComplex |
| float(_) | do_float | DoesFloat | __float__ | CanFloat |
| int(_) | do_int | DoesInt | __int__ | CanInt[+R: int = int] |
| bool(_) | do_bool | DoesBool | __bool__ | CanBool[+R: bool = bool] |
| bytes(_) | do_bytes | DoesBytes | __bytes__ | CanBytes[+R: bytes = bytes] |
| str(_) | do_str | DoesStr | __str__ | CanStr[+R: str = str] |
Note
The Can* interfaces of the types that can be used as typing.Literal
accept an optional type parameter R.
This can be used to indicate a literal return type,
for surgically precise typing, e.g. None, True, and 42 are
instances of CanBool[Literal[False]], CanInt[Literal[1]], and
CanStr[Literal['42']], respectively.
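These claims can be verified at runtime with plain builtins; the special methods really do return those literal values:

```python
# The special-method return values behind the Note's claims:
assert None.__bool__() is False  # None is a CanBool[Literal[False]]
assert True.__int__() == 1       # True is a CanInt[Literal[1]]
assert (42).__str__() == "42"    # 42 is a CanStr[Literal["42"]]
```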
These formatting methods are allowed to return instances that are a subtype
of the str builtin. The same holds for the __format__ argument.
So if you're a 10x developer who wants to hack Python's f-strings, but only
if your type hints are spot-on, optype is your friend.
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| repr(_) | do_repr | DoesRepr | __repr__ | CanRepr[+R: str = str] |
| format(_, x) | do_format | DoesFormat | __format__ | CanFormat[-T: str = str, +R: str = str] |
Additionally, optype provides protocols for types with (custom) hash or
index methods:
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| hash(_) | do_hash | DoesHash | __hash__ | CanHash |
| _.__index__() (docs) | do_index | DoesIndex | __index__ | CanIndex[+R: int = int] |
The "rich" comparison special methods often return a bool.
However, instances of any type can be returned (e.g. a numpy array).
This is why the corresponding optype.Can* interfaces accept a second type
argument for the return type, that defaults to bool when omitted.
The first type parameter matches the passed method argument, i.e. the
right-hand side operand, denoted here as x.
| operator | | operand | | | |
|---|---|---|---|---|---|
| expression | reflected | function | type | method | type |
| _ == x | x == _ | do_eq | DoesEq | __eq__ | CanEq[-T = object, +R = bool] |
| _ != x | x != _ | do_ne | DoesNe | __ne__ | CanNe[-T = object, +R = bool] |
| _ < x | x > _ | do_lt | DoesLt | __lt__ | CanLt[-T, +R = bool] |
| _ <= x | x >= _ | do_le | DoesLe | __le__ | CanLe[-T, +R = bool] |
| _ > x | x < _ | do_gt | DoesGt | __gt__ | CanGt[-T, +R = bool] |
| _ >= x | x <= _ | do_ge | DoesGe | __ge__ | CanGe[-T, +R = bool] |
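To illustrate a non-bool comparison result, here's a self-contained sketch with a stand-in CanLt protocol mirroring the shape described above (the Vec class and count_below function are hypothetical examples; Vec compares elementwise like a numpy array does):

```python
from typing import Protocol, TypeVar

T_contra = TypeVar("T_contra", contravariant=True)
R_co = TypeVar("R_co", covariant=True)


class CanLt(Protocol[T_contra, R_co]):
    """Stand-in for optype.CanLt[-T, +R]."""

    def __lt__(self, rhs: T_contra, /) -> R_co: ...


class Vec:
    """Elementwise comparison: __lt__ returns list[bool], not bool."""

    def __init__(self, *xs: float) -> None:
        self.xs = xs

    def __lt__(self, other: float) -> list[bool]:
        return [x < other for x in self.xs]


def count_below(v: CanLt[float, list[bool]], bound: float) -> int:
    # `v < bound` evaluates to a list of bools, which sum() can count
    return sum(v < bound)


print(count_below(Vec(1.0, 5.0, 2.0), 3.0))  # 2
```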
In the Python docs, these are referred to as "arithmetic operations". But the operands aren't limited to numeric types, the operations aren't required to be commutative, they might be non-deterministic, and they could have side-effects. Classifying them as "arithmetic" is, at the very least, a bit of a stretch.
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| _ + x | do_add | DoesAdd | __add__ | CanAdd[-T, +R = T]<br>CanAddSelf[-T]<br>CanAddSame[-T?, +R?] |
| _ - x | do_sub | DoesSub | __sub__ | CanSub[-T, +R = T]<br>CanSubSelf[-T]<br>CanSubSame[-T?, +R?] |
| _ * x | do_mul | DoesMul | __mul__ | CanMul[-T, +R = T]<br>CanMulSelf[-T]<br>CanMulSame[-T?, +R?] |
| _ @ x | do_matmul | DoesMatmul | __matmul__ | CanMatmul[-T, +R = T]<br>CanMatmulSelf[-T]<br>CanMatmulSame[-T?, +R?] |
| _ / x | do_truediv | DoesTruediv | __truediv__ | CanTruediv[-T, +R = T]<br>CanTruedivSelf[-T]<br>CanTruedivSame[-T?, +R?] |
| _ // x | do_floordiv | DoesFloordiv | __floordiv__ | CanFloordiv[-T, +R = T]<br>CanFloordivSelf[-T]<br>CanFloordivSame[-T?, +R?] |
| _ % x | do_mod | DoesMod | __mod__ | CanMod[-T, +R = T]<br>CanModSelf[-T]<br>CanModSame[-T?, +R?] |
| divmod(_, x) | do_divmod | DoesDivmod | __divmod__ | CanDivmod[-T, +R] |
| _ ** x, pow(_, x) | do_pow/2 | DoesPow | __pow__ | CanPow2[-T, +R = T]<br>CanPowSelf[-T]<br>CanPowSame[-T?, +R?] |
| pow(_, x, m) | do_pow/3 | DoesPow | __pow__ | CanPow3[-T, -M, +R = int] |
| _ << x | do_lshift | DoesLshift | __lshift__ | CanLshift[-T, +R = T]<br>CanLshiftSelf[-T]<br>CanLshiftSame[-T?, +R?] |
| _ >> x | do_rshift | DoesRshift | __rshift__ | CanRshift[-T, +R = T]<br>CanRshiftSelf[-T]<br>CanRshiftSame[-T?, +R?] |
| _ & x | do_and | DoesAnd | __and__ | CanAnd[-T, +R = T]<br>CanAndSelf[-T]<br>CanAndSame[-T?, +R?] |
| _ ^ x | do_xor | DoesXor | __xor__ | CanXor[-T, +R = T]<br>CanXorSelf[-T]<br>CanXorSame[-T?, +R?] |
| _ \| x | do_or | DoesOr | __or__ | CanOr[-T, +R = T]<br>CanOrSelf[-T]<br>CanOrSame[-T?, +R?] |
Tip
Because pow() can take an optional third argument, optype
provides separate interfaces for pow() with two and three arguments.
Additionally, there is the overloaded intersection type
type CanPow[-T, -M, +R, +RM] = CanPow2[T, R] & CanPow3[T, M, RM], as interface
for types that can take an optional third argument.
Note
The Can*Self protocols' methods return typing.Self and optionally accept T and
R. The Can*Same protocols also return Self, but instead accept Self | T, with
T and R optional generic type parameters that default to typing.Never.
To illustrate, CanAddSelf[T] implements __add__ as (self, rhs: T, /) -> Self,
while CanAddSame[T, R] implements it as (self, rhs: Self | T, /) -> Self | R, and
CanAddSame (without T and R) as (self, rhs: Self, /) -> Self.
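To make these shapes concrete, here's a minimal sketch of a class whose __add__ matches the CanAddSame form, i.e. (self, rhs: Self, /) -> Self (the Money class is a hypothetical example):

```python
class Money:
    """__add__ accepts and returns Money, i.e. (self, rhs: Self) -> Self --
    the shape described by CanAddSame (without type arguments)."""

    def __init__(self, cents: int) -> None:
        self.cents = cents

    def __add__(self, rhs: "Money") -> "Money":
        return Money(self.cents + rhs.cents)


total = Money(250) + Money(75)
print(total.cents)  # 325
```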
For the binary infix operators above, optype additionally provides
interfaces with reflected (swapped) operands, e.g. __radd__ is a reflected
__add__.
They are named like the originals, but with a CanR prefix, i.e.
__name__.replace('Can', 'CanR').
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| x + _ | do_radd | DoesRAdd | __radd__ | CanRAdd[-T, +R = T]<br>CanRAddSelf[-T] |
| x - _ | do_rsub | DoesRSub | __rsub__ | CanRSub[-T, +R = T]<br>CanRSubSelf[-T] |
| x * _ | do_rmul | DoesRMul | __rmul__ | CanRMul[-T, +R = T]<br>CanRMulSelf[-T] |
| x @ _ | do_rmatmul | DoesRMatmul | __rmatmul__ | CanRMatmul[-T, +R = T]<br>CanRMatmulSelf[-T] |
| x / _ | do_rtruediv | DoesRTruediv | __rtruediv__ | CanRTruediv[-T, +R = T]<br>CanRTruedivSelf[-T] |
| x // _ | do_rfloordiv | DoesRFloordiv | __rfloordiv__ | CanRFloordiv[-T, +R = T]<br>CanRFloordivSelf[-T] |
| x % _ | do_rmod | DoesRMod | __rmod__ | CanRMod[-T, +R = T]<br>CanRModSelf[-T] |
| divmod(x, _) | do_rdivmod | DoesRDivmod | __rdivmod__ | CanRDivmod[-T, +R] |
| x ** _, pow(x, _) | do_rpow | DoesRPow | __rpow__ | CanRPow[-T, +R = T]<br>CanRPowSelf[-T] |
| x << _ | do_rlshift | DoesRLshift | __rlshift__ | CanRLshift[-T, +R = T]<br>CanRLshiftSelf[-T] |
| x >> _ | do_rrshift | DoesRRshift | __rrshift__ | CanRRshift[-T, +R = T]<br>CanRRshiftSelf[-T] |
| x & _ | do_rand | DoesRAnd | __rand__ | CanRAnd[-T, +R = T]<br>CanRAndSelf[-T] |
| x ^ _ | do_rxor | DoesRXor | __rxor__ | CanRXor[-T, +R = T]<br>CanRXorSelf[-T] |
| x \| _ | do_ror | DoesROr | __ror__ | CanROr[-T, +R = T]<br>CanROrSelf[-T] |
Note
CanRPow corresponds to CanPow2; the 3-parameter "modulo" pow does not
reflect in Python.
According to the relevant Python docs:

> Note that ternary pow() will not try calling __rpow__() (the coercion rules would become too complicated).
Similar to the reflected ops, the inplace/augmented ops are prefixed with
CanI, namely:
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | types |
| _ += x | do_iadd | DoesIAdd | __iadd__ | CanIAdd[-T, +R]<br>CanIAddSelf[-T]<br>CanIAddSame[-T?] |
| _ -= x | do_isub | DoesISub | __isub__ | CanISub[-T, +R]<br>CanISubSelf[-T]<br>CanISubSame[-T?] |
| _ *= x | do_imul | DoesIMul | __imul__ | CanIMul[-T, +R]<br>CanIMulSelf[-T]<br>CanIMulSame[-T?] |
| _ @= x | do_imatmul | DoesIMatmul | __imatmul__ | CanIMatmul[-T, +R]<br>CanIMatmulSelf[-T]<br>CanIMatmulSame[-T?] |
| _ /= x | do_itruediv | DoesITruediv | __itruediv__ | CanITruediv[-T, +R]<br>CanITruedivSelf[-T]<br>CanITruedivSame[-T?] |
| _ //= x | do_ifloordiv | DoesIFloordiv | __ifloordiv__ | CanIFloordiv[-T, +R]<br>CanIFloordivSelf[-T]<br>CanIFloordivSame[-T?] |
| _ %= x | do_imod | DoesIMod | __imod__ | CanIMod[-T, +R]<br>CanIModSelf[-T]<br>CanIModSame[-T?] |
| _ **= x | do_ipow | DoesIPow | __ipow__ | CanIPow[-T, +R]<br>CanIPowSelf[-T]<br>CanIPowSame[-T?] |
| _ <<= x | do_ilshift | DoesILshift | __ilshift__ | CanILshift[-T, +R]<br>CanILshiftSelf[-T]<br>CanILshiftSame[-T?] |
| _ >>= x | do_irshift | DoesIRshift | __irshift__ | CanIRshift[-T, +R]<br>CanIRshiftSelf[-T]<br>CanIRshiftSame[-T?] |
| _ &= x | do_iand | DoesIAnd | __iand__ | CanIAnd[-T, +R]<br>CanIAndSelf[-T]<br>CanIAndSame[-T?] |
| _ ^= x | do_ixor | DoesIXor | __ixor__ | CanIXor[-T, +R]<br>CanIXorSelf[-T]<br>CanIXorSame[-T?] |
| _ \|= x | do_ior | DoesIOr | __ior__ | CanIOr[-T, +R]<br>CanIOrSelf[-T]<br>CanIOrSame[-T?] |
These inplace operators usually return themselves (after some in-place mutation).
But unfortunately, it currently isn't possible to use Self for this (i.e.
something like type MyAlias[T] = optype.CanIAdd[T, Self] isn't allowed).
So to help ease this unbearable pain, optype comes equipped with ready-made
aliases for you to use. They bear the same name, with an additional *Self
suffix, e.g. optype.CanIAddSelf[T].
Note
The CanI*Self protocols' methods return typing.Self and optionally accept T. The
CanI*Same protocols also return Self, but instead accept rhs: Self | T. Since
T defaults to Never, it will accept rhs: Self | Never if T is not provided,
which is equivalent to rhs: Self.
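A minimal sketch of the CanIAddSelf[int] shape, i.e. an __iadd__ that mutates in place and returns self (the Tally class is a hypothetical example):

```python
class Tally:
    """__iadd__ mutates in place and returns self: the CanIAddSelf[int] shape."""

    def __init__(self) -> None:
        self.total = 0

    def __iadd__(self, rhs: int) -> "Tally":
        self.total += rhs
        return self  # returning self keeps `t += n` rebinding to the same object


t = Tally()
t += 3
t += 4
print(t.total)  # 7
```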
Available since 0.12.1
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | types |
| +_ | do_pos | DoesPos | __pos__ | CanPos[+R]<br>CanPosSelf[+R?] |
| -_ | do_neg | DoesNeg | __neg__ | CanNeg[+R]<br>CanNegSelf[+R?] |
| ~_ | do_invert | DoesInvert | __invert__ | CanInvert[+R]<br>CanInvertSelf[+R?] |
| abs(_) | do_abs | DoesAbs | __abs__ | CanAbs[+R]<br>CanAbsSelf[+R?] |
The Can*Self variants return -> Self instead of R. Since optype 0.12.1 these
also accept an optional R type parameter (with a default of Never), which, when
provided, will result in a return type of -> Self | R.
The round() built-in function takes an optional second argument.
From a typing perspective, round() has two overloads: one with one parameter,
and one with two.
For both overloads, optype provides separate operand interfaces:
CanRound1[R] and CanRound2[T, RT].
Additionally, optype also provides their (overloaded) intersection type:
CanRound[-T, +R1, +R2] = CanRound1[R1] & CanRound2[T, R2].
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| round(_) | do_round/1 | DoesRound | __round__/1 | CanRound1[+R = int] |
| round(_, n) | do_round/2 | DoesRound | __round__/2 | CanRound2[-T = int, +R = float] |
| round(_, n=...) | do_round | DoesRound | __round__ | CanRound[-T = int, +R1 = int, +R2 = float] |
For example, type-checkers will mark the following code as valid (tested with pyright in strict mode):
```python
x: float = 3.14
x1: CanRound1[int] = x
x2: CanRound2[int, float] = x
x3: CanRound[int, int, float] = x
```

Furthermore, there are the alternative rounding functions from the
math standard library:
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| math.trunc(_) | do_trunc | DoesTrunc | __trunc__ | CanTrunc[+R = int] |
| math.floor(_) | do_floor | DoesFloor | __floor__ | CanFloor[+R = int] |
| math.ceil(_) | do_ceil | DoesCeil | __ceil__ | CanCeil[+R = int] |
Almost all implementations use int for R.
In fact, if no type for R is specified, it will default to int.
But technically speaking, these methods can be made to return anything.
Unlike operator, optype provides an operator for callable objects:
optype.do_call(f, *args, **kwargs).
CanCall is similar to collections.abc.Callable, but is runtime-checkable,
and doesn't use esoteric hacks.
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| _(*args, **kwargs) | do_call | DoesCall | __call__ | CanCall[**Tss, +R] |
Note
Pyright (and probably other type-checkers) tends to accept
collections.abc.Callable in more places than optype.CanCall.
This could be related to the lack of co/contra-variance specification for
typing.ParamSpec (they should almost always be contravariant, but
currently they can only be invariant).
In case you encounter such a situation, please open an issue about it, so we can investigate further.
The operand x of iter(_) is within Python known as an iterable, which is
what collections.abc.Iterable[V] is often used for (e.g. as base class, or
for instance checking).
The optype analogue is CanIter[R], which as the name suggests,
also implements __iter__. But unlike Iterable[V], its type parameter R
binds to the return type of iter(_) -> R. This makes it possible to annotate
the specific type of the iterable that iter(_) returns. Iterable[V] is
only able to annotate the type of the iterated value. To see why that isn't
possible, see python/typing#548.
The collections.abc.Iterator[V] is even more awkward; it is a subtype of
Iterable[V]. For those familiar with collections.abc this might come as a
surprise, but an iterator only needs to implement __next__, __iter__ isn't
needed. This means that the Iterator[V] is unnecessarily restrictive.
Apart from that being theoretically "ugly", it has significant performance
implications: the time-complexity of isinstance on a typing.Protocol scales
with the number of members, so even when the abc.ABC machinery is ignored,
collections.abc.Iterator is twice as slow as it needs to be.
That's one of the (many) reasons that optype.CanNext[V] and
optype.CanIter[R] are the better alternatives to Iterable and Iterator
from the abracadabra collections. This is how they are defined:
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| next(_) | do_next | DoesNext | __next__ | CanNext[+V] |
| iter(_) | do_iter | DoesIter | __iter__ | CanIter[+R: CanNext[object]] |
For the sake of compatibility with collections.abc, there is
optype.CanIterSelf[V], which is a protocol whose __iter__ returns
typing.Self, as well as a __next__ method that returns V.
I.e. it is equivalent to collections.abc.Iterator[V], but without the abc
nonsense.
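To see the difference in practice, here's a self-contained sketch with a stand-in CanNext protocol: the Countdown class (a hypothetical example) implements only __next__, so next() accepts it even though collections.abc.Iterator rejects it:

```python
from collections.abc import Iterator
from typing import Protocol, TypeVar, runtime_checkable

V_co = TypeVar("V_co", covariant=True)


@runtime_checkable
class CanNext(Protocol[V_co]):
    """Stand-in for optype.CanNext[+V]: only __next__ is required."""

    def __next__(self) -> V_co: ...


class Countdown:
    """Implements __next__, but not __iter__."""

    def __init__(self, n: int) -> None:
        self.n = n

    def __next__(self) -> int:
        if self.n <= 0:
            raise StopIteration
        self.n -= 1
        return self.n + 1


c = Countdown(3)
assert isinstance(c, CanNext)       # satisfies the protocol
assert not isinstance(c, Iterator)  # but not collections.abc.Iterator
print(next(c), next(c), next(c))    # 3 2 1
```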
The optype.CanAwait[R] is almost the same as collections.abc.Awaitable[R], except
that optype.CanAwait[R] is a pure interface, whereas Awaitable is
also an abstract base class (making it absolutely useless when writing stubs).
| operator | operand | |
|---|---|---|
| expression | method | type |
| await _ | __await__ | CanAwait[+R] |
Yes, you guessed it right; the abracadabra collections made the exact same mistakes for the async iterablors (or was it "iteramblers"...?).
But fret not; the optype alternatives are right here:
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| anext(_) | do_anext | DoesANext | __anext__ | CanANext[+V] |
| aiter(_) | do_aiter | DoesAIter | __aiter__ | CanAIter[+R: CanANext[object]] |
But wait, shouldn't V be a CanAwait? Well, only if you don't want to get
fired...
Technically speaking, __anext__ can return any type, and anext will pass
it along without nagging. For details, see the discussion at python/typeshed#7491.
Just because something is legal, doesn't mean it's a good idea (don't eat the
yellow snow).
Additionally, there is optype.CanAIterSelf[V], with both the
__aiter__() -> Self and the __anext__() -> V methods.
| operator | operand | | | |
|---|---|---|---|---|
| expression | function | type | method | type |
| len(_) | do_len | DoesLen | __len__ | CanLen[+R: int = int] |
| _.__length_hint__() (docs) | do_length_hint | DoesLengthHint | __length_hint__ | CanLengthHint[+R: int = int] |
| _[k] | do_getitem | DoesGetitem | __getitem__ | CanGetitem[-K, +V] |
| _.__missing__() (docs) | do_missing | DoesMissing | __missing__ | CanMissing[-K, +D] |
| _[k] = v | do_setitem | DoesSetitem | __setitem__ | CanSetitem[-K, -V] |
| del _[k] | do_delitem | DoesDelitem | __delitem__ | CanDelitem[-K] |
| k in _ | do_contains | DoesContains | __contains__ | CanContains[-K = object] |
| reversed(_) | do_reversed | DoesReversed | __reversed__ | CanReversed[+R], or<br>CanSequence[-I, +V, +N = int] |
Because CanMissing[K, D] generally doesn't show itself without
CanGetitem[K, V] there to hold its hand, optype conveniently stitched them
together as optype.CanGetMissing[K, V, D=V].
Similarly, there is optype.CanSequence[I: CanIndex | slice, V], which is the
combination of both CanLen and CanGetitem[I, V], and serves as a more
specific and flexible collections.abc.Sequence[V].
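A sketch of the idea behind CanSequence, using a simplified stand-in protocol that combines __len__ and __getitem__ (the Squares class and last function are hypothetical examples):

```python
from typing import Protocol, TypeVar

V_co = TypeVar("V_co", covariant=True)


class CanSequence(Protocol[V_co]):
    """Simplified stand-in for optype.CanSequence: __len__ + __getitem__."""

    def __len__(self) -> int: ...
    def __getitem__(self, index: int, /) -> V_co: ...


class Squares:
    """Sequence-like without subclassing collections.abc.Sequence."""

    def __init__(self, n: int) -> None:
        self.n = n

    def __len__(self) -> int:
        return self.n

    def __getitem__(self, i: int) -> int:
        if not 0 <= i < self.n:
            raise IndexError(i)
        return i * i


def last(seq: CanSequence[int]) -> int:
    return seq[len(seq) - 1]


print(last(Squares(5)))  # 16
```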
| operator | operand | |||
|---|---|---|---|---|
| expression | function | type | method | type |
v = _.k orv = getattr(_, k)
|
do_getattr |
DoesGetattr |
__getattr__ |
CanGetattr[-K:str=str, +V=object] |
_.k = v orsetattr(_, k, v)
|
do_setattr |
DoesSetattr |
__setattr__ |
CanSetattr[-K:str=str, -V=object] |
del _.k ordelattr(_, k)
|
do_delattr |
DoesDelattr |
__delattr__ |
CanDelattr[-K:str=str] |
dir(_) |
do_dir |
DoesDir |
__dir__ |
CanDir[+R:CanIter[CanIterSelf[str]]] |
Support for the with statement.
| operator | operand | |
|---|---|---|
| expression | method(s) | type(s) |
| | __enter__ | CanEnter[+C], or<br>CanEnterSelf |
| | __exit__ | CanExit[+R = None] |
| with _ as c: | __enter__, and __exit__ | CanWith[+C, +R = None], or<br>CanWithSelf[+R = None] |
CanEnterSelf and CanWithSelf are (runtime-checkable) aliases for
CanEnter[Self] and CanWith[Self, R], respectively.
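A minimal sketch of the shapes these protocols describe: an __enter__ that returns Self (the CanEnterSelf form) and an __exit__ returning None (the CanExit[None] form); the Timer class is a hypothetical example:

```python
class Timer:
    """__enter__ returns self (CanEnterSelf); __exit__ returns None (CanExit)."""

    def __enter__(self) -> "Timer":
        self.entered = True
        return self

    def __exit__(self, *exc_info: object) -> None:
        self.entered = False


with Timer() as t:
    print(t.entered)  # True
print(t.entered)  # False
```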
For the async with statement the interfaces look very similar:
| operator | operand | |
|---|---|---|
| expression | method(s) | type(s) |
| | __aenter__ | CanAEnter[+C], or<br>CanAEnterSelf |
| | __aexit__ | CanAExit[+R = None] |
| async with _ as c: | __aenter__, and __aexit__ | CanAsyncWith[+C, +R = None], or<br>CanAsyncWithSelf[+R = None] |
Interfaces for descriptors.
| operator | operand | |
|---|---|---|
| expression | method | type |
| v: V = T().d<br>vt: VT = T.d | __get__ | CanGet[-T, +V, +VT = V] |
| v: V = T().d<br>vt: Self = T.d | __get__ | CanGetSelf[-T, +V] |
| T().k = v | __set__ | CanSet[-T, -V] |
| del T().k | __delete__ | CanDelete[-T] |
| class T: d = _ | __set_name__ | CanSetName[-T, -N: str = str] |
Interfaces for emulating buffer types using the buffer protocol.
| operator | operand | |
|---|---|---|
| expression | method | type |
| v = memoryview(_) | __buffer__ | CanBuffer[-T: int = int] |
| del v | __release_buffer__ | CanReleaseBuffer |
For the copy standard library, optype.copy provides the following
runtime-checkable interfaces:
| copy standard library | optype.copy | |
|---|---|---|
| function | method | type |
| copy.copy(_) -> R | __copy__() -> R | CanCopy[+R] |
| copy.deepcopy(_, memo={}) -> R | __deepcopy__(memo, /) -> R | CanDeepcopy[+R] |
| copy.replace(_, /, **changes) -> R [1] | __replace__(**changes) -> R | CanReplace[+R] |

[1] copy.replace requires python>=3.13
(but optype.copy.CanReplace doesn't)
In practice, it makes sense that a copy of an instance is the same type as the
original.
But because typing.Self cannot be used as a type argument, this is difficult
to type properly.
Instead, you can use the optype.copy.Can{}Self types, which are the
runtime-checkable equivalents of the following (non-expressible) aliases:

```python
type CanCopySelf = CanCopy[Self]
type CanDeepcopySelf = CanDeepcopy[Self]
type CanReplaceSelf = CanReplace[Self]
```

For the dataclasses standard library, optype.dataclasses provides the
HasDataclassFields[V: Mapping[str, Field]] interface.
It can conveniently be used to check whether a type or instance is a
dataclass, i.e. isinstance(obj, HasDataclassFields).
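The check works because every dataclass carries a __dataclass_fields__ mapping. Here's a self-contained sketch with a stand-in protocol mirroring the idea (the Point class is a hypothetical example):

```python
from collections.abc import Mapping
from dataclasses import dataclass
from typing import Any, Protocol, runtime_checkable


@runtime_checkable
class HasDataclassFields(Protocol):
    """Stand-in for optype.dataclasses.HasDataclassFields: isinstance just
    checks for the __dataclass_fields__ attribute that @dataclass adds."""

    __dataclass_fields__: Mapping[str, Any]


@dataclass
class Point:
    x: float = 0.0
    y: float = 0.0


print(isinstance(Point(), HasDataclassFields))          # True
print(isinstance("not a dataclass", HasDataclassFields))  # False
```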
A collection of functions for runtime inspection of types, modules, and other objects.
| Function | Description |
|---|---|
| get_args(_) | A better alternative to typing.get_args() that flattens nested literals and unions, and also works with PEP 695 type aliases (see below). |
| get_protocol_members(_) | A better alternative to typing.get_protocol_members(). Returns a frozenset[str] with the names of the protocol's members. |
| get_protocols(_) | Returns a frozenset[type] of the protocols within the given module. |
| is_iterable(_) | Check whether the object can be iterated over, i.e. if it can be used in a for loop or be passed to iter(). |
| is_final(_) | Check if the type, method / classmethod / staticmethod / property, is decorated with @typing.final. Note that a @property is only recognized as final if the @final decorator is applied before the @property decorator. |
| is_protocol(_) | A backport of typing.is_protocol, which was introduced in Python 3.13. |
| is_runtime_protocol(_) | Check if the type expression is a runtime-protocol, i.e. a typing.Protocol type that is decorated with @typing.runtime_checkable. |
| is_union_type(_) | Check if the type is a union, e.g. str \| int. Unlike a typing.get_origin check, this also works with types.UnionType instances. |
| is_generic_alias(_) | Check if the type is a subscripted type, e.g. list[str]. Unlike a typing.get_origin check, this also works with types.GenericAlias instances. Unions are not considered generic aliases here. |

To illustrate one of the (many) issues with typing.get_args, consider nested literals:

```python
>>> from typing import Literal, TypeAlias, get_args
>>> Falsy: TypeAlias = Literal[None] | Literal[False, 0] | Literal["", b""]
>>> get_args(Falsy)
(typing.Literal[None], typing.Literal[False, 0], typing.Literal['', b''])
```

But this is in direct contradiction with the official typing documentation, which specifies that nested literals should be flattened. So this is why optype.inspect.get_args flattens them:

```python
>>> import optype as opt
>>> opt.inspect.get_args(Falsy)
(None, False, 0, '', b'')
```

Another issue of typing.get_args is that it returns an empty tuple for PEP 695 type aliases:

```python
>>> import typing
>>> import optype as opt
>>> type StringLike = str | bytes
>>> typing.get_args(StringLike)
()
>>> opt.inspect.get_args(StringLike)
(<class 'str'>, <class 'bytes'>)
```

Clearly, optype.inspect.get_args handles these correctly.
Note
All functions in optype.inspect also work for Python 3.12 type _ aliases
(i.e. types.TypeAliasType) and with typing.Annotated.
A collection of protocols and type-aliases that, unlike their analogues in _typeshed,
are accessible at runtime, and use a consistent naming scheme.
| optype.io protocol | implements | replaces |
|---|---|---|
| CanFSPath[+T: str \| bytes = str \| bytes] | __fspath__: () -> T | os.PathLike[AnyStr: (str, bytes)] |
| CanRead[+T] | read: () -> T | |
| CanReadN[+T] | read: (int) -> T | _typeshed.SupportsRead[+T] |
| CanReadline[+T] | readline: () -> T | _typeshed.SupportsNoArgReadline[+T] |
| CanReadlineN[+T] | readline: (int) -> T | _typeshed.SupportsReadline[+T] |
| CanWrite[-T, +RT = object] | write: (T) -> RT | _typeshed.SupportsWrite[-T] |
| CanFlush[+RT = object] | flush: () -> RT | _typeshed.SupportsFlush |
| CanFileno | fileno: () -> int | _typeshed.HasFileno |

| optype.io type alias | expression | replaces |
|---|---|---|
| ToPath[+T: str \| bytes = str \| bytes] | T \| CanFSPath[T] | _typeshed.StrPath, _typeshed.BytesPath, _typeshed.StrOrBytesPath, _typeshed.GenericPath[AnyStr] |
| ToFileno | int \| CanFileno | _typeshed.FileDescriptorLike |
Type aliases for the json standard library:
| json.load(s) return type | json.dumps(s) input type |
|---|---|
| Value | AnyValue |
| Array[V: Value = Value] | AnyArray[~V: AnyValue = AnyValue] |
| Object[V: Value = Value] | AnyObject[~V: AnyValue = AnyValue] |
The (Any)Value can be any json input, i.e. Value | Array | Object is
equivalent to Value.
It's also worth noting that Value is a subtype of AnyValue, which means
that AnyValue | Value is equivalent to AnyValue.
For the pickle standard library, optype.pickle provides the following
interfaces:
| method(s) | signature (bound) | type |
|---|---|---|
| __reduce__ | () -> R | CanReduce[+R: str \| tuple = str \| tuple] |
| __reduce_ex__ | (CanIndex) -> R | CanReduceEx[+R: str \| tuple = str \| tuple] |
| __getstate__ | () -> S | CanGetstate[+S] |
| __setstate__ | (S) -> None | CanSetstate[-S] |
| __getnewargs__<br>__new__ | () -> tuple[V, ...]<br>(V) -> Self | CanGetnewargs[+V] |
| __getnewargs_ex__<br>__new__ | () -> tuple[tuple[V, ...], dict[str, KV]]<br>(*tuple[V, ...], **dict[str, KV]) -> Self | CanGetnewargsEx[+V, ~KV] |
The string standard
library contains practical constants, but it has two issues:
- The constants contain a collection of characters, but are represented as a single string. This makes it practically impossible to type-hint the individual characters, so typeshed currently types these constants as a LiteralString.
- The names of the constants are inconsistent, and don't follow PEP 8.

So instead, optype.string provides an alternative interface, that is
compatible with string, but with slight differences:

- For each constant, there is a corresponding Literal type alias for the individual characters. Its name matches the name of the constant, but is singular instead of plural.
- Instead of a single string, optype.string uses a tuple of characters, so that each character has its own typing.Literal annotation. Note that this is only tested with (based)pyright / pylance, so it might not work with mypy (it has more bugs than it has lines of code).
- The names of the constants are consistent with PEP 8, and use a postfix notation for variants, e.g. DIGITS_HEX instead of hexdigits.
- Unlike string, optype.string has a constant (and type alias) for the binary digits '0' and '1': DIGITS_BIN (and DigitBin). Because besides the oct and hex functions in builtins, there's also the builtins.bin function.
| string._ | | optype.string._ | |
|---|---|---|---|
| constant | char type | constant | char type |
| missing | | DIGITS_BIN | DigitBin |
| octdigits | LiteralString | DIGITS_OCT | DigitOct |
| digits | LiteralString | DIGITS | Digit |
| hexdigits | LiteralString | DIGITS_HEX | DigitHex |
| ascii_letters | LiteralString | LETTERS | Letter |
| ascii_lowercase | LiteralString | LETTERS_LOWER | LetterLower |
| ascii_uppercase | LiteralString | LETTERS_UPPER | LetterUpper |
| punctuation | LiteralString | PUNCTUATION | Punctuation |
| whitespace | LiteralString | WHITESPACE | Whitespace |
| printable | LiteralString | PRINTABLE | Printable |
Each of the optype.string constants is exactly the same as the corresponding
string constant (after concatenation / splitting), e.g.
```python
>>> import string
>>> import optype as opt
>>> "".join(opt.string.PRINTABLE) == string.printable
True
>>> tuple(string.printable) == opt.string.PRINTABLE
True
```

Similarly, the values within a constant's Literal type exactly match the
values of its constant:

```python
>>> import optype as opt
>>> from optype.inspect import get_args
>>> get_args(opt.string.Printable) == opt.string.PRINTABLE
True
```

The optype.inspect.get_args is a non-broken variant of typing.get_args
that correctly flattens nested literals, type-unions, and PEP 695 type aliases,
so that it matches the official typing specs.
In other words: typing.get_args is yet another fundamentally broken
python-typing feature that's useless in the situations where you need it
most.
Type aliases for anything that can always be passed to
int, float, complex, iter, or typing.Literal
| Python constructor | optype.typing alias |
|---|---|
| int(_) | AnyInt |
| float(_) | AnyFloat |
| complex(_) | AnyComplex |
| iter(_) | AnyIterable |
| typing.Literal[_] | AnyLiteral |
Note
Even though some str and bytes can be converted to int, float,
complex, most of them can't, and are therefore not included in these
type aliases.
These are builtin types or collections that are empty, i.e. have length 0 or yield no elements.
| instance | optype.typing type |
|---|---|
| '' | EmptyString |
| b'' | EmptyBytes |
| () | EmptyTuple |
| [] | EmptyList |
| {} | EmptyDict |
| set() | EmptySet |
| (i for i in range(0)) | EmptyIterable |
| Literal values | optype.typing type | Notes |
|---|---|---|
| {False, True} | LiteralBool | Similar to typing.LiteralString, but for bool. |
| {0, 1, ..., 255} | LiteralByte | Integers in the range 0-255 that make up a bytes or bytearray object. |
A collection of low-level types for working with DLPack.
| type signature | bound method |
|---|---|
|
def __dlpack__(
*,
stream: int | None = ...,
max_version: tuple[int, int] | None = ...,
dl_device: tuple[T, D] | None = ...,
copy: bool | None = ...,
) -> types.CapsuleType: ... |
|
def __dlpack_device__() -> tuple[T, D]: ... |
The + prefix indicates that the type parameter is covariant.
There are also two convenient IntEnums in optype.dlpack: DLDeviceType for the
device types, and DLDataTypeCode for the internal type-codes of the DLPack
data types.
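As an illustration, a DLPack device-type enum can be sketched as a plain IntEnum. The numeric values below follow the DLPack specification (kDLCPU=1, kDLCUDA=2, ...); this is not optype's actual definition, and optype.dlpack.DLDeviceType may cover more members:

```python
from enum import IntEnum

# A sketch of a DLPack device-type enum; values per the DLPack spec.
class DLDeviceTypeSketch(IntEnum):
    CPU = 1
    CUDA = 2
    CUDA_HOST = 3  # CUDA pinned host memory
    OPENCL = 4
    METAL = 8
    ROCM = 10

# IntEnum members compare equal to plain integers:
print(DLDeviceTypeSketch.CUDA == 2)  # True
```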
Optype supports both NumPy 1 and 2. The current minimum supported version is 1.25,
following NEP 29 and SPEC 0.
optype.numpy uses numpy-typing-compat package to ensure compatibility for
older versions of NumPy. To ensure that the correct versions of numpy and
numpy-typing-compat are installed, you should install optype with the numpy extra:
pip install "optype[numpy]"

If you're using conda, the optype-numpy package can be used, which
will also install the required numpy and numpy-typing-compat versions:

conda install conda-forge::optype-numpy

Note
For the remainder of the optype.numpy docs, assume that the following
import aliases are available.
from typing import Any, Literal
import numpy as np
import numpy.typing as npt
import optype.numpy as onp

For the sake of brevity and readability, the PEP 695 and PEP 696 type
parameter syntax will be used, which is supported since Python 3.13.
Optype provides the generic onp.Array type alias for np.ndarray.
It is similar to npt.NDArray, but includes two (optional) type parameters:
one that matches the shape type (ND: tuple[int, ...]),
and one that matches the scalar type (ST: np.generic).
When we put the definitions of npt.NDArray and onp.Array side-by-side,
their differences become clear:
| npt.NDArray | onp.Array | onp.ArrayND |
|---|---|---|
type NDArray[
# no shape type
SCT: generic, # no default
] = ndarray[Any, dtype[SCT]] |
type Array[
NDT: (int, ...) = (int, ...),
SCT: generic = generic,
] = ndarray[NDT, dtype[SCT]] |
type ArrayND[
SCT: generic = generic,
NDT: (int, ...) = (int, ...),
] = ndarray[NDT, dtype[SCT]] |
Additionally, there are the four Array{0,1,2,3}D aliases, which are
equivalent to Array with tuple[()], tuple[int], tuple[int, int] and
tuple[int, int, int] as shape-type, respectively.
Tip
Before NumPy 2.1, the shape type parameter of ndarray (i.e. the type of
ndarray.shape) was invariant. It is therefore recommended to not use Literal
within shape types on numpy<2.1. So with numpy>=2.1 you can use
tuple[Literal[3], Literal[3]] without problem, but with numpy<2.1 you should use
tuple[int, int] instead.
See numpy/numpy#25729 and numpy/numpy#26081 for details.
In the same way as ArrayND for ndarray (shown for reference), its subtypes
np.ma.MaskedArray and np.matrix are also aliased:
| onp.ArrayND | onp.MArray | onp.Matrix |
|---|---|---|
type ArrayND[
SCT: generic = generic,
NDT: (int, ...) = (int, ...),
] = ndarray[NDT, dtype[SCT]] |
type MArray[
SCT: generic = generic,
NDT: (int, ...) = (int, ...),
] = ma.MaskedArray[NDT, dtype[SCT]] |
type Matrix[
SCT: generic = generic,
M: int = int,
N: int = M,
] = matrix[(M, N), dtype[SCT]] |
For masked arrays with specific ndim, you could also use one of the four
MArray{0,1,2,3}D aliases.
To check whether a given object is an instance of Array{0,1,2,3,N}D, in a way that
static type-checkers also understand it, the following PEP 742 typeguards can
be used:
| typeguard (optype.numpy._) | narrows to (optype.numpy._) | shape type (builtins._) |
|---|---|---|
| is_array_nd | ArrayND[ST] | tuple[int, ...] |
| is_array_0d | Array0D[ST] | tuple[()] |
| is_array_1d | Array1D[ST] | tuple[int] |
| is_array_2d | Array2D[ST] | tuple[int, int] |
| is_array_3d | Array3D[ST] | tuple[int, int, int] |
These functions additionally accept an optional dtype argument, that can either be
a np.dtype[ST] instance, a type[ST], or something that has a dtype: np.dtype[ST]
attribute.
The signatures are almost identical to each other, and in the 0d case it roughly
looks like this:
_T = TypeVar("_T", bound=np.generic, default=Any)
_ToDType: TypeAlias = type[_T] | np.dtype[_T] | HasDType[np.dtype[_T]]

def is_array_0d(a, /, dtype: _ToDType[_T] | None = None) -> TypeIs[Array0D[_T]]: ...

A shape is nothing more than a tuple of (non-negative) integers, i.e.
an instance of tuple[int, ...] such as (42,), (480, 720, 3) or ().
The length of a shape is often referred to as the number of dimensions
or the dimensionality of the array or scalar.
For arrays this is accessible through np.ndarray.ndim, which is an
alias for len(np.ndarray.shape).
Note
Before NumPy 2, the maximum number of dimensions was 32, but it has since
been increased to 64 (i.e. ndim <= 64).
To make typing the shape of an array easier, optype provides two families of
shape type aliases: AtLeast{N}D and AtMost{N}D.
The {N} should be replaced by the number of dimensions, which currently
is limited to 0, 1, 2, and 3.
Both of these families are generic, and their (optional) type parameters must
be either int (default), or a literal (non-negative) integer, i.e. like
typing.Literal[N: int].
The names AtLeast{N}D and AtMost{N}D are pretty much self-explanatory:

- AtLeast{N}D is a tuple[int, ...] with ndim >= N
- AtMost{N}D is a tuple[int, ...] with ndim <= N
The shape aliases are roughly defined as:
| N | ndim >= N | ndim <= N |
|---|---|---|
| 0 | type AtLeast0D = tuple[int, ...] | type AtMost0D = tuple[()] |
| 1 | type AtLeast1D = tuple[int, *AtLeast0D] | type AtMost1D = AtMost0D \| tuple[int] |
| 2 | type AtLeast2D = tuple[int, int] \| AtLeast3D[int] | type AtMost2D = AtMost1D \| tuple[int, int] |
| 3 | type AtLeast3D = tuple[int, int, int] \| tuple[int, int, int, int] \| tuple[int, int, int, int, int] \| ... | type AtMost3D = AtMost2D \| tuple[int, int, int] |
The AtLeast{N}D aliases optionally accept a type argument that can be either
int (default) or Any. Passing Any turns it into a gradual tuple type, so that
it can also be assigned to compatible bounded shape-types. So AtLeast1D[Any]
is assignable to tuple[int], whereas AtLeast1D (equivalent to AtLeast1D[int])
is not.
However, mypy currently has a bug that causes it to falsely reject such gradual shape-type assignments for N >= 1.
Similar to the numpy._typing._ArrayLike{}_co coercible array-like types,
optype.numpy provides the To{}ND aliases. Unlike the ones in numpy, these
don't accept "bare" scalar types (the __len__ method is required).
Additionally, there are the To{}1D, To{}2D, and To{}3D for vector-likes,
matrix-likes, and cuboid-likes, and the To{} aliases for "bare" scalar types.
| builtins exact scalar types | numpy exact scalar types | optype.numpy scalar-like | {1,2,3,N}-d array-like | strict {1,2,3}-d array-like |
|---|---|---|---|---|
False |
False_ |
ToJustFalse |
|||
False| 0
|
False_ |
ToFalse |
|||
True |
True_ |
ToJustTrue |
|||
True| 1
|
True_ |
ToTrue |
|||
bool |
bool_ |
ToJustBool |
ToJustBool{}D |
ToJustBoolStrict{}D |
|
bool| 0| 1
|
bool_ |
ToBool |
ToBool{}D |
ToBoolStrict{}D |
|
~int |
integer |
ToJustInt |
ToJustInt{}D |
ToJustIntStrict{}D |
|
int| bool
|
integer| bool_
|
ToInt |
ToInt{}D |
ToIntStrict{}D |
|
float16 |
ToJustFloat16 |
ToJustFloat16_{}D |
ToJustFloat16Strict{}D |
||
| float16| int8| uint8| bool_
|
ToFloat16 |
ToFloat16_{}D |
ToFloat16Strict{}D |
||
float32 |
ToJustFloat32 |
ToJustFloat32_{}D |
ToJustFloat32Strict{}D |
||
| float32| float16| int16| uint16| int8| uint8| bool_
|
ToFloat32 |
ToFloat32_{}D |
ToFloat32Strict{}D |
||
~float |
float64 |
ToJustFloat64 |
ToJustFloat64_{}D |
ToJustFloat64Strict{}D |
|
float| int| bool
|
float64| float32| float16| integer| bool_
|
ToFloat64 |
ToFloat64_{}D |
ToFloat64Strict{}D |
|
~float |
floating |
ToJustFloat |
ToJustFloat{}D |
ToJustFloatStrict{}D |
|
float| int| bool
|
floating| integer| bool_
|
ToFloat |
ToFloat{}D |
ToFloatStrict{}D |
|
complex64 |
ToJustComplex64 |
ToJustComplex64_{}D |
ToJustComplex64Strict{}D |
||
| complex64| float32| float16| int16| uint16| int8| uint8| bool_
|
ToComplex64 |
ToComplex64_{}D |
ToComplex64Strict{}D |
||
~complex |
complex128 |
ToJustComplex128 |
ToJustComplex128_{}D |
ToJustComplex128Strict{}D |
|
complex| float| int| bool
|
complex128| complex64| float64| float32| float16| integer| bool_
|
ToComplex128 |
ToComplex128_{}D |
ToComplex128Strict{}D |
|
~complex |
complexfloating |
ToJustComplex |
ToJustComplex{}D |
ToJustComplexStrict{}D |
|
complex| float| int| bool
|
number| bool_
|
ToComplex |
ToComplex{}D |
ToComplexStrict{}D |
|
complex| float| int| bool
| bytes| str |
generic |
ToScalar |
ToArray{}D |
ToArrayStrict{}D |
|
Note
The To*Strict{1,2,3}D aliases were added in optype 0.7.3.
These strict array-likes require their input to be shape-typed with the
matching number of dimensions.
This means that e.g. ToFloat1D and ToFloat2D are disjoint (non-overlapping),
which makes them suitable for overloading array-likes of a particular dtype
by number of dimensions.
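The overloading idea can be sketched with stdlib types alone; here Sequence[float] and Sequence[Sequence[float]] stand in for the 1-d and 2-d array-likes (unlike optype's Strict aliases, these stand-ins are not truly disjoint for a type-checker):

```python
from collections.abc import Sequence
from typing import overload

@overload
def totals(x: Sequence[float]) -> float: ...
@overload
def totals(x: Sequence[Sequence[float]]) -> list[float]: ...
def totals(x):
    # runtime dispatch on the effective number of dimensions
    if not x or isinstance(x[0], (int, float)):
        return sum(x)
    return [sum(row) for row in x]

print(totals([1.0, 2.0, 3.0]))      # 6.0
print(totals([[1.0, 2.0], [3.0]]))  # [3.0, 3.0]
```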
Note
The ToJust{Bool,Float,Complex}* type aliases were added in optype 0.8.0.
See optype.Just for more information.
Note
The To[Just]{False,True} type aliases were added in optype 0.9.1.
These only include the np.bool types on numpy>=2.2. Before that, np.bool
wasn't generic, making it impossible to distinguish between np.False_ and np.True_
using static typing.
Note
The ToArrayStrict{1,2,3}D types are generic since optype 0.9.1, analogous to
their non-strict dual type, ToArray{1,2,3}D.
Note
The To[Just]{Float16,Float32,Complex64}* type aliases were added in optype 0.12.0.
Source code: optype/numpy/_to.py
| Type Alias | String values |
|---|---|
| ByteOrder | ByteOrderChar \| ByteOrderName \| {L, B, N, I, S} |
| ByteOrderChar | {<, >, =, \|} |
| ByteOrderName | {little, big, native, ignore, swap} |
| Casting | CastingUnsafe \| CastingSafe |
| CastingUnsafe | {unsafe} |
| CastingSafe | {no, equiv, safe, same_kind} |
| ConvolveMode | {full, same, valid} |
| Device | {cpu} |
| IndexMode | {raise, wrap, clip} |
| OrderCF | {C, F} |
| OrderACF | {A, C, F} |
| OrderKACF | {K, A, C, F} |
| PartitionKind | {introselect} |
| SortKind | {Q, quick[sort], M, merge[sort], H, heap[sort], S, stable[sort]} |
| SortSide | {left, right} |
Compatibility module for supporting a wide range of numpy versions (currently >=1.25).
It contains the abstract numeric scalar types, with numpy>=2.2
type-parameter defaults, which I explained in the release notes.
SPEC 7-compatible type aliases.
The optype.numpy.random module provides three type aliases: RNG, ToRNG, and
ToSeed.
In general, the most useful one is ToRNG, which describes what can be
passed to numpy.random.default_rng. It is defined as the union of RNG, ToSeed,
and numpy.random.BitGenerator.
The RNG is the union type of numpy.random.Generator and its legacy dual type,
numpy.random.RandomState.
ToSeed accepts integer-like scalars, sequences, and arrays, as well as instances of
numpy.random.SeedSequence.
In NumPy, a dtype (data type) object, is an instance of the
numpy.dtype[ST: np.generic] type.
It's commonly used to convey metadata of a scalar type, e.g. within arrays.
Because the type parameter of np.dtype isn't optional, it could be more
convenient to use the alias optype.numpy.DType, which is defined as:
type DType[ST: np.generic = np.generic] = np.dtype[ST]

Apart from the "CamelCase" name, the only difference with np.dtype is that
the type parameter can be omitted, in which case it's equivalent to
np.dtype[np.generic], but shorter.
The optype.numpy.Scalar interface is a generic runtime-checkable protocol,
that can be seen as a "more specific" np.generic, both in name, and from
a typing perspective.
Its type signature looks roughly like this:
type Scalar[
# The "Python type", so that `Scalar.item() -> PT`.
PT: object,
# The "N-bits" type (without having to deal with `npt.NBitBase`).
# It matches the `itemsize: NB` property.
NB: int = int,
] = ...

It can be used as e.g.

are_birds_real: Scalar[bool, Literal[1]] = np.bool_(True)
the_answer: Scalar[int, Literal[2]] = np.uint16(42)
alpha: Scalar[float, Literal[8]] = np.float64(1 / 137)

Note
The second type argument for itemsize can be omitted, which is equivalent
to setting it to int, so Scalar[PT] and Scalar[PT, int] are equivalent.
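Because Scalar is a runtime-checkable protocol, any object with a compatible item() method and itemsize attribute matches it structurally. A stdlib-only sketch of that shape (the ScalarLike and Celsius names are hypothetical; the real optype.numpy.Scalar has more members):

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class ScalarLike(Protocol):
    # sketch of the protocol's shape: an itemsize and an item() method
    @property
    def itemsize(self) -> int: ...
    def item(self) -> object: ...

class Celsius:
    itemsize = 8

    def __init__(self, value: float) -> None:
        self._value = value

    def item(self) -> float:
        return self._value

# the structural isinstance check only verifies member presence
print(isinstance(Celsius(21.5), ScalarLike))  # True
```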
A large portion of numpy's public API consists of universal functions, often
denoted as ufuncs, which are (callable) instances of
np.ufunc.
Tip
Custom ufuncs can be created using np.frompyfunc, but also
through a user-defined class that implements the required attributes and
methods (i.e., duck typing).
But np.ufunc has a big issue: it accepts no type parameters.
This makes it very difficult to properly annotate its callable signature and
its literal attributes (e.g. .nin and .identity).
This is where optype.numpy.UFunc comes into play:
It's a runtime-checkable generic typing protocol, that has been thoroughly
type- and unit-tested to ensure compatibility with all of numpy's ufunc
definitions.
Its generic type signature looks roughly like:
type UFunc[
# The type of the (bound) `__call__` method.
Fn: CanCall = CanCall,
# The types of the `nin` and `nout` (readonly) attributes.
# Within numpy these match either `Literal[1]` or `Literal[2]`.
Nin: int = int,
Nout: int = int,
# The type of the `signature` (readonly) attribute;
# Must be `None` unless this is a generalized ufunc (gufunc), e.g.
# `np.matmul`.
Sig: str | None = str | None,
# The type of the `identity` (readonly) attribute (used in `.reduce`).
# Unless `Nin: Literal[2]`, `Nout: Literal[1]`, and `Sig: None`,
# this should always be `None`.
# Note that `complex` also includes `bool | int | float`.
Id: complex | bytes | str | None = float | None,
] = ...

Note
Unfortunately, the extra callable methods of np.ufunc (at, reduce,
reduceat, accumulate, and outer), are incorrectly annotated (as None
attributes, even though at runtime they're methods that raise a
ValueError when called).
This currently makes it impossible to properly type these in
optype.numpy.UFunc; doing so would make it incompatible with numpy's
ufuncs.
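The duck-typing idea from the Tip above can be sketched with a stdlib protocol: any object exposing nin, nout, and __call__ passes a structural isinstance check (the UFuncLike and MyAdd names are hypothetical; the real optype.numpy.UFunc covers more attributes):

```python
from typing import Any, Protocol, runtime_checkable

@runtime_checkable
class UFuncLike(Protocol):
    # minimal structural stand-in for a ufunc
    @property
    def nin(self) -> int: ...
    @property
    def nout(self) -> int: ...
    def __call__(self, *args: Any, **kwargs: Any) -> Any: ...

class MyAdd:
    nin = 2
    nout = 1

    def __call__(self, a: float, b: float) -> float:
        return a + b

print(isinstance(MyAdd(), UFuncLike))  # True
print(MyAdd()(2.0, 3.0))               # 5.0
```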
The Any{Scalar}Array type aliases describe array-likes that are coercible to a
numpy.ndarray with a specific dtype.
Unlike numpy.typing.ArrayLike, these optype.numpy aliases don't
accept "bare" scalar types such as float and np.float64. However, arrays of
"zero dimensions" like onp.Array[tuple[()], np.float64] will be accepted.
This is in line with the behavior of numpy.isscalar on numpy >= 2.
import numpy.typing as npt
import optype.numpy as onp
v_np: npt.ArrayLike = 3.14 # accepted
v_op: onp.AnyArray = 3.14 # rejected
sigma1_np: npt.ArrayLike = [[0, 1], [1, 0]] # accepted
sigma1_op: onp.AnyArray = [[0, 1], [1, 0]] # accepted

Note
The numpy.dtypes module exists since NumPy 1.25, but its
type annotations were incorrect before NumPy 2.1 (see
numpy/numpy#27008).
See the docs for more info on the NumPy scalar type hierarchy.
| numpy._ scalar | numpy._ scalar base | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|
generic |
AnyArray |
AnyDType |
|
number |
generic |
AnyNumberArray |
AnyNumberDType |
integer |
number |
AnyIntegerArray |
AnyIntegerDType |
inexact |
AnyInexactArray |
AnyInexactDType |
|
unsignedinteger |
integer |
AnyUnsignedIntegerArray |
AnyUnsignedIntegerDType |
signedinteger |
AnySignedIntegerArray |
AnySignedIntegerDType |
|
floating |
inexact |
AnyFloatingArray |
AnyFloatingDType |
complexfloating |
AnyComplexFloatingArray |
AnyComplexFloatingDType |
|
| numpy._ scalar | numpy._ scalar base | numpy.dtypes._ dtype | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|---|
|
|
unsignedinteger |
AnyUIntArray |
AnyUIntDType |
|
uintp |
AnyUIntPArray |
AnyUIntPDType |
||
uint8, ubyte |
UInt8DType |
AnyUInt8Array |
AnyUInt8DType |
|
uint16, ushort |
UInt16DType |
AnyUInt16Array |
AnyUInt16DType |
|
|
|
UInt32DType |
AnyUInt32Array |
AnyUInt32DType |
|
uint64 |
UInt64DType |
AnyUInt64Array |
AnyUInt64DType |
|
|
|
UIntDType |
AnyUIntCArray |
AnyUIntCDType |
|
|
|
ULongDType |
AnyULongArray |
AnyULongDType |
|
ulonglong |
ULongLongDType |
AnyULongLongArray |
AnyULongLongDType |
|
| numpy._ scalar | numpy._ scalar base | numpy.dtypes._ dtype | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|---|
|
|
signedinteger |
AnyIntArray |
AnyIntDType |
|
intp |
AnyIntPArray |
AnyIntPDType |
||
int8, byte |
Int8DType |
AnyInt8Array |
AnyInt8DType |
|
int16, short |
Int16DType |
AnyInt16Array |
AnyInt16DType |
|
|
|
Int32DType |
AnyInt32Array |
AnyInt32DType |
|
int64 |
Int64DType |
AnyInt64Array |
AnyInt64DType |
|
|
|
IntDType |
AnyIntCArray |
AnyIntCDType |
|
|
|
LongDType |
AnyLongArray |
AnyLongDType |
|
longlong |
LongLongDType |
AnyLongLongArray |
AnyLongLongDType |
|
| numpy._ scalar | numpy._ scalar base | numpy.dtypes._ dtype | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|---|
float16,half
|
np.floating |
Float16DType |
AnyFloat16Array |
AnyFloat16DType |
float32,single
|
Float32DType |
AnyFloat32Array |
AnyFloat32DType |
|
float64,double
|
np.floating &builtins.float
|
Float64DType |
AnyFloat64Array |
AnyFloat64DType |
|
|
np.floating |
LongDoubleDType |
AnyLongDoubleArray |
AnyLongDoubleDType |
| numpy._ scalar | numpy._ scalar base | numpy.dtypes._ dtype | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|---|
complex64,csingle
|
complexfloating |
Complex64DType |
AnyComplex64Array |
AnyComplex64DType |
complex128,cdouble
|
complexfloating &builtins.complex
|
Complex128DType |
AnyComplex128Array |
AnyComplex128DType |
|
|
complexfloating |
CLongDoubleDType |
AnyCLongDoubleArray |
AnyCLongDoubleDType |
Scalar types with "flexible" length, whose values have a (constant) length
that depends on the specific np.dtype instantiation.
| numpy._ scalar | numpy._ scalar base | numpy.dtypes._ dtype | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|---|
str_ |
character |
StrDType |
AnyStrArray |
AnyStrDType |
bytes_ |
BytesDType |
AnyBytesArray |
AnyBytesDType |
|
dtype("c") |
AnyBytes8DType |
|||
void |
flexible |
VoidDType |
AnyVoidArray |
AnyVoidDType |
| numpy._ scalar | numpy._ scalar base | numpy.dtypes._ dtype | optype.numpy._ array-like | optype.numpy._ dtype-like |
|---|---|---|---|---|
|
|
generic |
BoolDType |
AnyBoolArray |
AnyBoolDType |
object_ |
ObjectDType |
AnyObjectArray |
AnyObjectDType |
|
datetime64 |
DateTime64DType |
AnyDateTime64Array |
AnyDateTime64DType |
|
timedelta64 |
|
TimeDelta64DType |
AnyTimeDelta64Array |
AnyTimeDelta64DType |
StringDType |
AnyStringArray |
AnyStringDType |
||
Within optype.numpy there are several Can* (single-method) and Has*
(single-attribute) protocols, related to the __array_*__ dunders of the
NumPy Python API.
These typing protocols are, just like the optype.Can* and optype.Has* ones,
runtime-checkable and extensible (i.e. not @final).
Tip
All type parameters of these protocols can be omitted, which is equivalent to passing its upper type bound.
| Protocol type signature | Implements | NumPy docs |
|---|---|---|
class CanArray[
ND: tuple[int, ...] = ...,
ST: np.generic = ...,
]: ... |
def __array__[RT = ST](
_,
dtype: DType[RT] | None = ...,
) -> Array[ND, RT] |
|
class CanArrayUFunc[
U: UFunc = ...,
R: object = ...,
]: ... |
def __array_ufunc__(
_,
ufunc: U,
method: LiteralString,
*args: object,
**kwargs: object,
) -> R: ... |
|
class CanArrayFunction[
F: CanCall[..., object] = ...,
R = object,
]: ... |
def __array_function__(
_,
func: F,
types: CanIterSelf[type[CanArrayFunction]],
args: tuple[object, ...],
kwargs: Mapping[str, object],
) -> R: ... |
|
class CanArrayFinalize[
T: object = ...,
]: ... |
def __array_finalize__(_, obj: T): ... |
|
class CanArrayWrap: ... |
def __array_wrap__[ND, ST](
_,
array: Array[ND, ST],
context: (...) | None = ...,
return_scalar: bool = ...,
) -> Self | Array[ND, ST] |
|
class HasArrayInterface[
V: Mapping[str, object] = ...,
]: ... |
__array_interface__: V |
|
class HasArrayPriority: ... |
__array_priority__: float |
|
class HasDType[
DT: DType = ...,
]: ... |
dtype: DT |
|
Footnotes

1. Since numpy>=2.2 the NDArray alias uses tuple[int, ...] as shape-type instead of Any.
2. Since NumPy 2, np.uint and np.int_ are aliases for np.uintp and np.intp, respectively.
3. On unix-based platforms np.[u]intc are aliases for np.[u]int32.
4. On NumPy 1, np.uint and np.int_ are what in NumPy 2 are now the np.ulong and np.long types, respectively.
5. Depending on the platform, np.longdouble is (almost always) an alias for either float128, float96, or (sometimes) float64.
6. Depending on the platform, np.clongdouble is (almost always) an alias for either complex256, complex192, or (sometimes) complex128.
7. Since NumPy 2, np.bool is preferred over np.bool_, which only exists for backwards compatibility.
8. At runtime np.timedelta64 is a subclass of np.signedinteger, but this is currently not reflected in the type annotations.
9. The np.dtypes.StringDType has no associated numpy scalar type, and its .type attribute returns the builtins.str type instead. But from a typing perspective, such a np.dtype[builtins.str] isn't a valid type.