Description
I have been leaning on searchsorted heavily for one of my projects and was playing around with how to speed things up. I found out that pre-sorting the second input for large arrays greatly speeds up the computation (even taking into account the up front cost of sorting).
x = np.random.randn(5000000)
y = np.random.randn(5000000)
x.sort()
%time np.searchsorted(x, y)
CPU times: user 10.3 s, sys: 36.7 ms, total: 10.4 s
Wall time: 10.4 s
%time y.sort(); np.searchsorted(x, y)
CPU times: user 959 ms, sys: 12.4 ms, total: 971 ms
Wall time: 971 ms

I was surprised, because the documentation doesn't suggest that searchsorted makes any assumption about whether y is sorted. Is this speedup purely because data locality is better after sorting y, or is there an actual algorithmic reason for it? I tried digging into the C code but couldn't follow where this would be implemented.
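For what it's worth, here is a self-contained version of the comparison (my own sketch, not the original IPython timings above) that runs outside IPython and explicitly charges the cost of sorting y to the second measurement:

import time
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.standard_normal(5_000_000))  # haystack: must be sorted for searchsorted
y = rng.standard_normal(5_000_000)           # needles: unsorted to start with

# Search with unsorted needles.
t0 = time.perf_counter()
idx_unsorted = np.searchsorted(x, y)
t_unsorted = time.perf_counter() - t0

# Sort the needles first, counting the sort towards the total.
t0 = time.perf_counter()
y_sorted = np.sort(y)
idx_sorted = np.searchsorted(x, y_sorted)
t_sorted = time.perf_counter() - t0

print(f"unsorted needles:            {t_unsorted:.2f} s")
print(f"sorted needles (incl. sort): {t_sorted:.2f} s")

# Sorting y only permutes the queries; the multiset of answers is identical.
assert np.array_equal(np.sort(idx_unsorted), idx_sorted)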
Would it make sense to note and explain this behavior in the documentation? And would it make sense to add an argument such as searchsorted(x, y, both_sorted=True), or a function like merge_sorted(x, y), that assumes both x and y are sorted?
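To make the merge_sorted(x, y) idea concrete, here is a rough pure-Python sketch (the name merge_searchsorted and the implementation are hypothetical and only illustrate the algorithm a both_sorted fast path could use; they are not existing NumPy API):

import numpy as np

def merge_searchsorted(x, y, side="left"):
    # Hypothetical sketch: assumes BOTH x and y are sorted ascending.
    # A single linear merge replaces one binary search per element of y,
    # i.e. O(len(x) + len(y)) instead of O(len(y) * log(len(x))).
    out = np.empty(len(y), dtype=np.intp)
    i = 0
    for j, v in enumerate(y):
        if side == "left":
            while i < len(x) and x[i] < v:
                i += 1
        else:
            while i < len(x) and x[i] <= v:
                i += 1
        out[j] = i
    return out

# Agrees with np.searchsorted when both inputs are sorted.
x = np.sort(np.random.randn(10_000))
y = np.sort(np.random.randn(10_000))
assert np.array_equal(merge_searchsorted(x, y), np.searchsorted(x, y))

In Python the loop is of course far slower than the existing C binary search; the point is only that when both inputs are sorted, a single merge pass could replace repeated binary searches.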
See also: https://stackoverflow.com/questions/27916710/numpy-merge-sorted-array-to-an-new-array