FIX: Allow deepcopy on norms and scales #18126
Conversation
This seems like too heavy-handed a fix? We are trying to fix deep-copying of norms, but are enabling deep copying of all of the scales. The scales participate in the transform stack getting from data -> screen, so I'm not sure that making just some of the things in the tree deep-copy-able is the right choice.
Well, the problem is not deep copying norms per se, it's deep copying norms that are derived from scales, which is currently just …. The point is that the scales are at the end of the transform tree, so locking their transforms out of …. The hackier way to fix this is to write a ….
Haven't had a chance to look at this yet, but I think this should be targeted to 3.4 (which #16457 is targeted for) and not 3.3.
I don't know whether this is related, but I had this issue.
@tacaswell can you weigh in again on this if you are still concerned?
Looking at things again (it's been a while), I am actually slightly confused why copying and deepcopying of transforms is forbidden in general (perhaps I completely missed the point, in which case I apologize in advance). In #18119 (comment) @tacaswell wrote …, but actually AFAICT it is the modified node that is in charge of notifying its dependents ("parents") that it has been modified and that they (the dependents) need to be invalidated. IOW, if …

(*) that's the normal semantics of copying: if you have …
Force-pushed from ed82fab to 772e1a2.
lib/matplotlib/scale.py (outdated):

```diff
@@ -79,6 +79,18 @@ def limit_range_for_scale(self, vmin, vmax, minpos):
         return vmin, vmax


class _TransformScaleCopyable():
```
Suggested change:

```diff
-class _TransformScaleCopyable():
+class _CopyableTransfromMixin():
+    """Mixin to support deep copy on a transfrom."""
```
I don't think we need "scale" in here. It's just the copy feature that we provide, and it doesn't have to say anything about who may use it.

While historically we did not call mixins `*Mixin`, I think it's good style to do so, and there is precedent with `_BlendedMixin`.
sure, plus or minus spelling ;-)
Force-pushed from 772e1a2 to f7baacb.
That unfortunately broke docs since the mixin became undocumented.
I guess there is no fundamental reason for it not to be public?
Force-pushed from f7baacb to 01049f1.
Flagging this because it doesn't match my understanding of what we agreed was best on the call today. Worst case I'm wrong, but better safe than sorry.
That said, it's a pretty simple fix.
lib/matplotlib/scale.py (outdated):

```python
class LinearTransform(CopyableTransformMixin, IdentityTransform):
    pass
```
Sorry to nitpick about this, but I think it makes a lot more sense to instead have `class IdentityTransform(CopyableTransformMixin, Transform)` in transform.py. Then you can just return `IdentityTransform` below and not have to create the `LinearTransform` class at all. That was my takeaway from the call today, but @timhoffm approved already so I may be missing something?
What is the difference between this and doing it in scale.py where it is needed? We really don't particularly want people using a special version of `transform.Identity`, do we?
Ah, I emphasized the wrong thing. The point is to not make a special Transform class at all, but instead to add copy and deepcopy ability to the real `IdentityTransform` (using the mix-in). Then you don't need the extra class at all, I think.
Sure, I understand. But my point is that all the other scales have a named transform associated with them, and I think it's fine for the linear scale to have a (trivial) `LinearTransform` associated with it. The advantage is that the goofy mixin is confined to scale.py, and we don't pollute transform.py.

The discussion about `IdentityTransform` on the call was pointing out that it definitely didn't need to have a stricture against copying, but to my mind that just goes to @anntzer's overall point that copying all the transforms could be reconsidered. But that is a 3.5 sort of thing, not a release-critical 3.4 fix.
I agree with @brunobeltran (@brunobeltran thanks for catching this. I was too concerned with naming the mixin 😄).

We have scales and transforms that are associated with them. For nonlinear scales we need special transforms, and it's natural that a LogScale is connected to a LogTransform. However, for a linear scale we don't need a transform (or only the identity). Making a `LinearTransform`, which is effectively an `IdentityTransform`, only for the sake of matching the name is more confusing than helpful (that's also what got me confused in the first place in yesterday's discussion).

Note also the docstring of `LinearScale.get_transform()`:

> Return the transform for linear scaling, which is just the `~matplotlib.transforms.IdentityTransform`.

Given that the mixin is private again, we're not polluting the namespace of transform.py if we put it there.
Sure but it is strange to have a mixin that we use once in the transform module and a bunch of times in the scale module. Particularly when the mixin is really only needed for the scales to work. I think it just makes things unnecessarily complicated.
I think which is more complicated is in the eye of the beholder. I personally find the purpose of the mix-in to be clear from the name. On the other hand, an equivalent solution (that it sounds like you like better) would be to just have (in transforms.py):

```python
class _CopyableTransformMixin():
    ...

class IdentityTransform(Affine2DBase, _CopyableTransformMixin):
    ...

class _CopyableTransform(Transform, _CopyableTransformMixin):
    pass
```

then in scale.py, we simply do

```python
from matplotlib.transforms import ..., _CopyableTransform

class FuncTransform(_CopyableTransform):
    ...
```

Then there's not a mixin scattered over two files, the semantics of what's going on are confined to just transforms.py, and what we get is that `IdentityTransform` now copies correctly, and no users are confused about what this "LinearTransform" object is that quacks like `IdentityTransform` but has a different name.

(Bonus) Whoever ends up making all Transforms copy-able in the future has a nice framework to start working within.
I like the current solution better, where nothing changes in transform.py and all the changes are in scale.py.

I don't think this is the correct framework for any fundamental changes to `Transform` that allow it to be copyable in general. That will require substantially more state checking etc. that is beyond a simple mixin.
I'd like to keep this with no additional public API for now (other than supporting the deepcopy on norms and scales). In particular, I don't want to introduce a funky `LinearTransform` that is basically an `IdentityTransform`.

That implies we have to make `IdentityTransform` itself copyable, i.e. there has to be a change to transform.py. Though it can all be private. I have no strong preference concerning mixin into all scale transforms vs. a `_CopyableTransform` base class (partly also because it can all be kept private, and so we can change that should the need arise).
I made `IdentityTransform` copyable by adding a `__deepcopy__` (and `__copy__`) to it, and removed `LinearTransform` from transform.py.
Force-pushed from 713e675 to 4699c0c.
I don't think the test failures are due to this PR. I can rerun after review.
Modulo the style comment.
Force-pushed from 0cac039 to a7439b6.
Force-pushed from a7439b6 to e152f02.
@brunobeltran you are still blocking this. I believe your concerns are addressed.
Sorry for being late to the party, but perhaps #19281 would be more general?
Put on hold until #19281 gets decided.
@anntzer does it make sense to get this in for 3.4 and then leave your more comprehensive change for a future release? I can only imagine testing it is going to be a bit of a challenge.
I'll try to join the call tomorrow and we can discuss then?
Superseded by #19281.
PR Summary

Closes #18119

I guess it's an API change to change the return type of `scale.linear.get_transform`? This needs tests, but the following now works instead of raising an error.
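The snippet from the PR description was not captured above; the following is a hedged toy reconstruction of the pattern the PR enables (illustrative class names, not matplotlib's real classes): deep-copying an object that holds a scale-derived transform.

```python
import copy

class CopyableIdentityTransform:
    """Toy stand-in for a transform made copyable by this PR."""
    def __deepcopy__(self, memo):
        # Stateless: a fresh instance is a faithful copy.
        return type(self)()

class FuncNormLike:
    """Toy norm that, like FuncNorm, holds a scale-derived transform."""
    def __init__(self):
        self._trf = CopyableIdentityTransform()
        self.vmin, self.vmax = 0.0, 1.0

norm = FuncNormLike()
norm2 = copy.deepcopy(norm)  # previously this raised instead of copying
```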
PR Checklist