Big speed up in searchsorted if second input is also sorted #10937
Comments
+1 on adding
The function has a comment describing this. The way it works is to do binary search with the modification that the first trial is the last success rather than the center of the whole array. When the keys are sorted, the search range decreases as the keys move upwards.
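As a rough Python sketch of that idea (an illustration of the description above, not NumPy's actual C code; the helper name is made up):

```python
def searchsorted_reuse_bound(haystack, keys):
    """Illustrative only: left-bisection where each search starts at the
    previous hit instead of the middle of the whole array, so ascending
    keys keep shrinking the remaining search range."""
    result = []
    lo, prev_key = 0, None
    for key in keys:
        if prev_key is not None and key < prev_key:
            lo = 0                     # keys went backwards: fall back to a full-range search
        left, right = lo, len(haystack)
        while left < right:            # standard bisect_left loop
            mid = (left + right) // 2
            if haystack[mid] < key:
                left = mid + 1
            else:
                right = mid
        result.append(left)
        lo, prev_key = left, key       # reuse this hit as the next lower bound
    return result
```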
I see, so unless I'm missing something this is still a suboptimal speedup -- i.e. it does not reduce the runtime from
@charris - For anyone like me misreading your comment, I should clarify that the function we're proposing is not "(merge sort)ed" but "merge (sorted)". Clearly that name is suboptimal!
Agreed that the effect is unexpectedly large; my back-of-the-envelope calculation, which may be way off, gives me an expected speedup of ~7%. I'm going to guess that what is going on is a pattern of memory accesses that doesn't change much from search to search, so caching may come into play. For sorted keys one can also proceed by taking increasingly large jumps upward, followed by a binary search.
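A sketch of that "increasingly large jumps" (galloping) approach, purely as an illustration and assuming the keys are already sorted; bisect is only used to finish off the bracketed window:

```python
import bisect

def gallop_searchsorted(haystack, ascending_keys):
    """Illustrative only: for each key, gallop forward from the previous
    result in steps of 1, 2, 4, ... to bracket the insertion point, then
    binary-search inside that small window."""
    n = len(haystack)
    result = []
    lo = 0
    for key in ascending_keys:         # assumes keys are in ascending order
        step, hi = 1, lo
        while hi < n and haystack[hi] < key:
            lo = hi + 1                # everything up to hi is < key since haystack is sorted
            hi += step
            step *= 2
        hi = min(hi, n)
        lo = bisect.bisect_left(haystack, key, lo, hi)
        result.append(lo)
    return result
```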
@eric-wieser The merge sorts do merge_sorted :)
What is actually providing the speedup here is the reduction in branch misses. With this dense list of keys to search, the wanted next key is usually very close to the beginning of the remaining search space, so the branch results are mostly going in the same direction.
The optimization of reducing the search space is actually harmful for the unsorted performance here. We knew these cases existed when it was implemented, but at the time the binsearch was also untyped, so in total the performance was still better for all cases.
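A quick way to see the effect being discussed, purely illustrative (the array sizes and repeat count are arbitrary):

```python
import numpy as np
from timeit import timeit

rng = np.random.default_rng(0)
haystack = np.sort(rng.random(1_000_000))
keys = rng.random(1_000_000)
sorted_keys = np.sort(keys)

# Same amount of work and the same set of answers; only the key order differs.
t_random = timeit(lambda: np.searchsorted(haystack, keys), number=10)
t_sorted = timeit(lambda: np.searchsorted(haystack, sorted_keys), number=10)
print(f"random key order: {t_random:.3f} s, sorted key order: {t_sorted:.3f} s")
```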
I have been leaning on searchsorted heavily for one of my projects and was playing around with how to speed things up. I found out that pre-sorting the second input for large arrays greatly speeds up the computation (even taking into account the up-front cost of sorting).

I was surprised because from the documentation it didn't seem like searchsorted made any assumptions about whether y was sorted or not. Is this speedup purely because data locality is better after sorting y? Or is there actually an algorithmic reason for this speed-up? I tried digging into the C code but couldn't follow where this would be implemented.

Would it make sense to note and explain this behavior in the documentation? Would it make sense to add an additional argument to searchsorted(x, y, both_sorted=True), or something like merge_sorted(x, y) which assumes both x and y are sorted?

See also: https://stackoverflow.com/questions/27916710/numpy-merge-sorted-array-to-an-new-array
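For reference, a minimal sketch of the pre-sorting workaround described above (the helper name and the use of argsort to restore the original key order are just one way to write it):

```python
import numpy as np

def searchsorted_presort(x, y):
    """Sort the keys first, run searchsorted on the sorted keys, then
    scatter the results back into the original order of y."""
    order = np.argsort(y, kind="stable")
    out = np.empty(len(y), dtype=np.intp)
    out[order] = np.searchsorted(x, y[order])
    return out

# Should match np.searchsorted(x, y) element for element.
```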