PRF: Don't use MaskedArray in Aitoff transform. #9862


Merged: 1 commit, Nov 28, 2017
lib/matplotlib/projections/geo.py (19 changes: 9 additions, 10 deletions)

@@ -269,24 +269,23 @@ def __init__(self, resolution):
             self._resolution = resolution

         def transform_non_affine(self, ll):
-            longitude = ll[:, 0:1]
-            latitude = ll[:, 1:2]
+            longitude = ll[:, 0]
+            latitude = ll[:, 1]

             # Pre-compute some values
             half_long = longitude / 2.0
             cos_latitude = np.cos(latitude)

             alpha = np.arccos(cos_latitude * np.cos(half_long))
-            # Mask this array or we'll get divide-by-zero errors
-            alpha = ma.masked_where(alpha == 0.0, alpha)
-            # The numerators also need to be masked so that masked
-            # division will be invoked.
+            # Avoid divide-by-zero errors using same method as NumPy.
+            alpha[alpha == 0.0] = 1e-20
@anntzer (Contributor) commented on Nov 27, 2017:
np.maximum(alpha, 1e-20, out=alpha) should work better if you somehow have tiny alphas (between 0 and 1e-20) and also saves an extra allocation, I think.
(or alpha = np.maximum(alpha, 1e-20) if you don't want to obfuscate it :-))
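For illustration (not part of the PR), a minimal sketch contrasting the two idioms discussed in this comment; the 1e-20 threshold is just the arbitrary value from the patch:

import numpy as np

alpha = np.array([0.0, 0.5, 1.0, 2.0])

# Idiom from the patch: replace exact zeros in place.
a1 = alpha.copy()
a1[a1 == 0.0] = 1e-20

# Idiom suggested here: clamp everything below the threshold,
# writing into the same buffer to avoid an extra temporary.
a2 = alpha.copy()
np.maximum(a2, 1e-20, out=a2)

# Both make sin(a)/a finite; nonzero entries are unchanged.
print(np.sin(a1) / a1)
print(np.sin(a2) / a2)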

A member replied:
It's fast either way, but np.maximum takes more than twice as long, at least in my test with 10,000 pts. I think the 1e-20 is a completely arbitrary small number, and its only purpose is to prevent division by exactly zero. A tiny number still works:

In [21]: np.sin(1e-300) / 1e-300
Out[21]: 1.0
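To see why an arbitrary tiny clamp is enough, here is a minimal standalone sketch (not part of the PR) of the same computation at the problem point lon = lat = 0, where alpha would otherwise be exactly zero:

import numpy as np

longitude = np.array([0.0])
latitude = np.array([0.0])

half_long = longitude / 2.0
cos_latitude = np.cos(latitude)

alpha = np.arccos(cos_latitude * np.cos(half_long))  # exactly 0 here
alpha[alpha == 0.0] = 1e-20          # the clamp from the patch
sinc_alpha = np.sin(alpha) / alpha   # ~1.0, no divide-by-zero warning

x = (cos_latitude * np.sin(half_long)) / sinc_alpha
y = np.sin(latitude) / sinc_alpha
print(x, y)  # both [0.], matching the old masked-and-filled result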

             # We want unnormalized sinc. numpy.sinc gives us normalized
-            sinc_alpha = ma.sin(alpha) / alpha
+            sinc_alpha = np.sin(alpha) / alpha

-            x = (cos_latitude * ma.sin(half_long)) / sinc_alpha
-            y = (ma.sin(latitude) / sinc_alpha)
-            return np.concatenate((x.filled(0), y.filled(0)), 1)
+            xy = np.empty_like(ll, float)
A contributor commented:
I like column_stack, which I think has a very descriptive name... (but it's just personal preference)
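For reference, a small sketch (not from the thread) showing that the two constructions produce identical (N, 2) output:

import numpy as np

x = np.random.rand(5)
y = np.random.rand(5)

# Option discussed here: stack the two 1-D arrays as columns.
xy_stacked = np.column_stack([x, y])

# Option used in the patch: pre-allocate and assign to slices.
xy_prealloc = np.empty((x.size, 2))
xy_prealloc[:, 0] = x
xy_prealloc[:, 1] = y

assert np.array_equal(xy_stacked, xy_prealloc)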

A member replied:

Pre-allocating and then assigning to slices is faster, and still very readable.

The contributor replied:
I think it's essentially the same speed (not that it really matters):

In [15]: %%timeit x = np.random.rand(10000); y = np.random.rand(10000)
    ...: z = np.column_stack([x, y])
    ...: 
37.8 µs ± 216 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)

In [16]: %%timeit x = np.random.rand(10000); y = np.random.rand(10000)
    ...: z = np.empty((10000, 2))
    ...: z[:, 0] = x; z[:, 1] = y
    ...: 
36.7 µs ± 1.39 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)

+            xy[:, 0] = (cos_latitude * np.sin(half_long)) / sinc_alpha
+            xy[:, 1] = np.sin(latitude) / sinc_alpha
+            return xy
             transform_non_affine.__doc__ = Transform.transform_non_affine.__doc__

         def transform_path_non_affine(self, path):
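A brief usage sketch (assuming a standard Matplotlib install): plotting on the built-in "aitoff" projection exercises this transform, including the lon = lat = 0 point that previously required masking:

import numpy as np
import matplotlib.pyplot as plt

# Geographic projections in Matplotlib take angles in radians.
fig = plt.figure()
ax = fig.add_subplot(111, projection="aitoff")
lon = np.radians([-120, -60, 0, 60, 120])
lat = np.radians([-45, -20, 0, 20, 45])
ax.plot(lon, lat, "o")
ax.grid(True)
plt.show()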