
Big speed up in searchsorted if second input is also sorted #10937

Open
ahwillia opened this issue Apr 20, 2018 · 9 comments
Comments

@ahwillia

I have been leaning on searchsorted heavily for one of my projects and was playing around with ways to speed things up. I found that for large arrays, pre-sorting the second input greatly speeds up the computation (even taking into account the up-front cost of the sort).

x = np.random.randn(5000000)
y = np.random.randn(5000000)
x.sort()

%time np.searchsorted(x, y)
CPU times: user 10.3 s, sys: 36.7 ms, total: 10.4 s
Wall time: 10.4 s

%time y.sort(); np.searchsorted(x, y)
CPU times: user 959 ms, sys: 12.4 ms, total: 971 ms
Wall time: 971 ms

I was surprised because the documentation doesn't suggest that searchsorted makes any assumptions about whether y is sorted or not. Is this speedup purely because data locality is better after sorting y, or is there actually an algorithmic reason for it? I tried digging into the C code but couldn't follow where this would be implemented.

Would it make sense to note and explain this behavior in the documentation? Would it make sense to add an additional argument to searchsorted(x, y, both_sorted=True), or something like merge_sorted(x, y) which assumes both x and y are sorted?

See also: https://stackoverflow.com/questions/27916710/numpy-merge-sorted-array-to-an-new-array
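For reference, the proposed merge_sorted(x, y) could be sketched in pure Python roughly as below. This is only an illustration of the semantics (a single linear walk over both sorted arrays), not a proposal for the actual C implementation; the function name and side keyword mirror searchsorted but are hypothetical.

```python
import numpy as np

def merge_sorted(x, y, side="left"):
    """Hypothetical linear-time equivalent of np.searchsorted(x, y)
    for the case where *both* x and y are already sorted.

    Walks the two arrays once, O(len(x) + len(y)), instead of doing
    one binary search per key, O(len(y) * log(len(x))).
    """
    out = np.empty(len(y), dtype=np.intp)
    i = 0
    for j, key in enumerate(y):  # y must be sorted ascending
        # advance i until x[i] would sort after key
        if side == "left":
            while i < len(x) and x[i] < key:
                i += 1
        else:
            while i < len(x) and x[i] <= key:
                i += 1
        out[j] = i
    return out

x = np.array([1.0, 3.0, 5.0, 7.0])
y = np.array([0.0, 3.0, 6.0, 9.0])
assert np.array_equal(merge_sorted(x, y), np.searchsorted(x, y))
```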

@eric-wieser
Member

+1 on adding merge_sorted - I've found myself wanting it inside the histogram implementation

@charris
Member

charris commented Apr 20, 2018

The function has the comment

        /*
         * Updating only one of the indices based on the previous key
         * gives the search a big boost when keys are sorted, but slightly
         * slows down things for purely random ones.
         */

The way it works is to do binary search with the modification that the first trial is the last success rather than the center of the whole array. When the keys are sorted, the search range decreases as the keys move upwards.
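A toy Python version of that trick might look like the following: a plain lower-bound binary search, except each search starts its lower bound at the previous result instead of 0, falling back to the full range when a key sorts before the previous hit. (The real NumPy C code differs in detail; this only illustrates the idea.)

```python
import numpy as np

def searchsorted_hinted(x, keys):
    """Lower-bound binary search over sorted x, where each search
    reuses the previous result as its starting lower bound.  For
    ascending keys the search window only shrinks as the keys move
    upwards; for a key below the previous hit we restart from 0."""
    n = len(x)
    out = np.empty(len(keys), dtype=np.intp)
    lo = 0
    for j, key in enumerate(keys):
        if lo > 0 and key <= x[lo - 1]:
            lo = 0  # hint not usable, restart from the beginning
        hi = n
        while lo < hi:
            mid = (lo + hi) // 2
            if x[mid] < key:
                lo = mid + 1
            else:
                hi = mid
        out[j] = lo
    return out
```

Note the results match np.searchsorted for sorted and unsorted keys alike; only the cost changes.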

@charris
Member

charris commented Apr 20, 2018

On adding merge_sorted, the needed code can be taken from the merge_sort implementations.

@ahwillia
Author

> The way it works is to do binary search with the modification that the first trial is the last success rather than the center of the whole array. When the keys are sorted, the search range decreases as the keys move upwards.

I see. So unless I'm missing something, this is still a suboptimal speedup, i.e. it does not reduce the runtime from O(n log n) to O(n). I'm surprised that it still has such a big effect; maybe that's because in my example x and y were the same length?

@eric-wieser
Member

eric-wieser commented Apr 21, 2018

Note that a merge_sorted(a, b) implementation would be O(Na + Nb), whereas search_sorted (without the optimization above) is O(Nb log Na). For Nb much smaller than Na, search_sorted is faster than the merge approach.

@charris - For anyone like me misreading your comment, I should clarify that the function we're proposing is not "(merge sort)ed" but "merge (sorted)". Clearly that name is suboptimal!
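The trade-off can be made concrete with a rough operation-count comparison. The cost functions below are back-of-the-envelope models that assume comparable constant factors, which is a simplification:

```python
import math

def merge_cost(na, nb):
    # a linear merge touches every element of both arrays once
    return na + nb

def binsearch_cost(na, nb):
    # one binary search over the sorted array of size na per key
    return nb * math.log2(na)

# similar sizes: the linear merge does fewer comparisons
assert merge_cost(10**6, 10**6) < binsearch_cost(10**6, 10**6)
# few keys against a huge sorted array: binary search wins easily
assert binsearch_cost(10**6, 10) < merge_cost(10**6, 10)
```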

@charris
Member

charris commented Apr 21, 2018

Agreed that the effect is unexpectedly large; my back-of-the-envelope calculation, which may be way off, gives an expected speedup of ~7%. My guess is that what is going on is a pattern of memory accesses that doesn't change much from search to search, so the cache may come into play. For sorted keys one could also proceed by taking increasingly large jumps upward, followed by binary search.
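The "increasingly large jumps upward" scheme is essentially exponential (galloping) search. A sketch, assuming the hint-passing interface below (which is illustrative, not NumPy API):

```python
import numpy as np

def gallop_lower_bound(x, key, lo=0):
    """Leftmost insertion point of `key` in sorted `x`, starting
    from a hint `lo`: take upward jumps of doubling size, then
    binary-search the bracketed window.  Cost is O(log d) where d
    is the distance from the hint to the answer."""
    n = len(x)
    step = 1
    hi = lo
    while hi < n and x[hi] < key:
        lo = hi + 1   # everything up to and including hi is below key
        hi += step
        step *= 2     # double the jump each time
    hi = min(hi, n)
    # ordinary binary search inside the bracketed window [lo, hi)
    while lo < hi:
        mid = (lo + hi) // 2
        if x[mid] < key:
            lo = mid + 1
        else:
            hi = mid
    return lo

# with sorted keys, each search can start where the last one ended
x = np.sort(np.random.default_rng(0).standard_normal(1000))
keys = np.sort(np.random.default_rng(1).standard_normal(100))
hint, result = 0, []
for k in keys:
    hint = gallop_lower_bound(x, k, hint)
    result.append(hint)
assert np.array_equal(result, np.searchsorted(x, keys))
```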

@charris
Member

charris commented Apr 21, 2018

@eric-wieser The merge sorts do merge_sorted :)

@juliantaylor
Contributor

What is actually providing the speedup here is the reduction in branch misses. With this dense list of keys to search, the wanted next key is usually very close to the beginning of the remaining search space, so the branch results mostly go in the same direction.
As searching is all just comparing and branching, allowing the CPU to correctly predict most branches greatly improves performance.

@juliantaylor
Contributor

The optimization of reducing the search space is actually harmful for unsorted-key performance here. We knew these cases existed when it was implemented, but at the time the binsearch was also untyped, so in total the performance was still better for all cases.
It might be worthwhile to tune this code again a bit for unsorted keys.
