
Conversation

@nicholasloveday (Collaborator)

Speed-up of >150x on sample data: a 100x100x5 array with 500 ensemble members.

```python
ens_count = fcst.count(ensemble_member_dim)  # M: number of non-NaN members at each point
M_total = fcst.sizes[ensemble_member_dim]    # M_total: total size of the ensemble dimension

if M_total > 0:
    ...  # remainder of the snippet not shown in this excerpt
```
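For context, the review comments below mention sorting and coefficients. A minimal sketch of the standard order-statistic identity that this kind of speed-up typically relies on is given here; the function names and exact formula are assumptions for illustration, not the PR's actual code. The ensemble spread term of the CRPS can be computed in O(M log M) via sorted members instead of the O(M^2) pairwise form:

```python
import numpy as np

def crps_pairwise(ens, obs):
    # O(M^2) reference form: mean |x_i - y| minus half the mean pairwise spread.
    M = ens.size
    term1 = np.abs(ens - obs).mean()
    term2 = np.abs(ens[:, None] - ens[None, :]).sum() / (2 * M**2)
    return term1 - term2

def crps_sorted(ens, obs):
    # O(M log M): the pairwise sum collapses to coefficients (2i - M - 1)
    # applied to the sorted members x_(1) <= ... <= x_(M).
    M = ens.size
    x = np.sort(ens)
    i = np.arange(1, M + 1)
    term1 = np.abs(x - obs).mean()
    term2 = ((2 * i - M - 1) * x).sum() / M**2
    return term1 - term2

rng = np.random.default_rng(0)
ens = rng.normal(size=500)
assert np.isclose(crps_pairwise(ens, 0.3), crps_sorted(ens, 0.3))
```

The asymptotic saving, combined with vectorisation over the 100x100x5 grid, is consistent with the reported >150x speed-up, though the PR's actual implementation (including the `ens_count` handling of NaN members) is not reproduced here.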
Collaborator:
This needs more comments to explain why this is valid/equivalent to the unsorted method. It looks like a lot more than just sorting; it also introduces coefficients for some reason, and some kind of basic explanation of what's happening should be added to orient the reader.
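For what it's worth, the equivalence being asked about follows from a standard identity, assuming the PR uses the usual sorted-member coefficients:

$$\sum_{i=1}^{M}\sum_{j=1}^{M}\lvert x_i - x_j\rvert \;=\; 2\sum_{i<j}\bigl(x_{(j)} - x_{(i)}\bigr) \;=\; 2\sum_{i=1}^{M}(2i - M - 1)\,x_{(i)},$$

since the $i$-th sorted member $x_{(i)}$ enters with $+1$ in the $i-1$ pairs where it is the larger value and $-1$ in the $M-i$ pairs where it is the smaller, giving the coefficient $(i-1)-(M-i) = 2i - M - 1$.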

Collaborator:

I think it would be useful to have a reference to the computational method in the comment below. Depending on the quality of the reference, that might be enough to satisfy me that the method is justified (not that I doubt your work, Nick). Otherwise I'm in the position of having to prove the equivalence.
