[MRG] Modify svm/plot_separating_hyperplane.py example for matplotlib v2 #8369
Conversation
I updated your description; this should not close #8364, which is a generic issue about all the examples. Your PR is about one of the examples.
I would advise you to disable CircleCI on your fork unless you have a very good reason not to. There are a variety of reasons for this that I am not going to go into. One is that we have convenient ways to find the generated documentation that do not work if CircleCI is enabled on your fork.
For future reference, here are the example links for the stable doc, dev doc and this PR doc.
@@ -12,12 +12,13 @@
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

from sklearn.datasets import make_classification
You should probably not change the data unless you have a very good reason to.
Ah OK I missed that, thanks.
I modified the title and description of this PR because I am also going through some more examples which need to change due to the matplotlib upgrade. Matplotlib v2.0 by default supports …
Small PRs are way easier to review and merge. Unless the change is a search-and-replace kind of thing (which I don't think it is, and on top of that each example will require visual inspection to make sure that everything works fine), I would be in favour of doing it one example at a time.
@lesteve yeah, you are right, I do like that. Thanks.
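As background for the discussion above, here is a minimal sketch of the kind of adjustment involved, using toy data rather than the example's actual dataset: matplotlib 2.0 changed several plotting defaults (for instance, scatter markers no longer get a separate dark edge by default), so examples typically pass an explicit edge colour to keep the markers legible.

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy data for illustration only (not the dataset used in the example).
rng = np.random.RandomState(0)
X = np.r_[rng.randn(20, 2) - [2, 2], rng.randn(20, 2) + [2, 2]]
y = [0] * 20 + [1] * 20

# Under matplotlib 2.0 defaults, scatter markers have no separate edge
# colour, so an explicit edgecolors keeps them readable on light colormaps.
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired, edgecolors='k')
plt.show()
```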
X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
Y = [0] * 20 + [1] * 20

#X = np.r_[np.random.randn(20, 2) - [2, 2], np.random.randn(20, 2) + [2, 2]]
You can remove this
Codecov Report
@@            Coverage Diff             @@
##           master    #8369     +/-   ##
==========================================
+ Coverage   94.75%   95.48%   +0.72%
==========================================
  Files         342      342
  Lines       60809    60913     +104
==========================================
+ Hits        57617    58160     +543
+ Misses       3192     2753     -439
Continue to review full report at Codecov.
Plot using contour levels on decision functions
ylim = ax.get_ylim()

# create grid to evaluate model
x = np.linspace(xlim[0], xlim[1], 30)
We should really have a function for this but that's another issue #6338
That's not merged yet.
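For context, here is a minimal, self-contained sketch of the grid-evaluation-plus-contour pattern this hunk introduces (the helper discussed in #6338 does not exist yet). The toy data, kernel and C value are illustrative assumptions, not taken from the PR.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

# Illustrative two-class toy data and a linear SVC (values are assumptions).
rng = np.random.RandomState(0)
X = np.r_[rng.randn(20, 2) - [2, 2], rng.randn(20, 2) + [2, 2]]
y = [0] * 20 + [1] * 20
clf = svm.SVC(kernel='linear', C=1000).fit(X, y)

ax = plt.gca()
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired, edgecolors='k')

# Create a grid over the current axis limits, evaluate the decision
# function on it, and draw the boundary and margins as contour levels.
xlim = ax.get_xlim()
ylim = ax.get_ylim()
xx = np.linspace(xlim[0], xlim[1], 30)
yy = np.linspace(ylim[0], ylim[1], 30)
YY, XX = np.meshgrid(yy, xx)
xy = np.vstack([XX.ravel(), YY.ravel()]).T
Z = clf.decision_function(xy).reshape(XX.shape)
ax.contour(XX, YY, Z, colors='k', levels=[-1, 0, 1],
           alpha=0.5, linestyles=['--', '-', '--'])
plt.show()
```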

X, Y = make_classification(n_features=2, n_redundant=0, n_informative=1,
You should use `y` here. The scikit-learn convention is that variable names starting with a capital letter should be reserved for 2d arrays.
Can you use a `random_state` argument? Otherwise the plot will change each time you run the example.
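A minimal sketch of how both comments could be addressed if `make_classification` were kept; the `n_clusters_per_class` and `random_state` values here are illustrative, not taken from the PR.

```python
from sklearn.datasets import make_classification

# Lowercase y for the 1d target array (scikit-learn naming convention) and
# a fixed random_state so the plot stays the same across runs.
X, y = make_classification(n_features=2, n_redundant=0, n_informative=1,
                           n_clusters_per_class=1, random_state=0)
```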
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
            s=80, facecolors='none')
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
plt.scatter(X[:, 0], X[:, 1], c=Y, s=50, cmap='autumn', edgecolors='k')
Do not change something unless there is a very good reason to. In this case, I would keep the `plt.cm.Paired` colormap.
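A minimal sketch of that suggestion, keeping `plt.cm.Paired` and only adding what matplotlib 2.0 needs so the points and the hollow support-vector markers stay visible; the data and classifier are the same illustrative assumptions as in the earlier sketch.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm

# Same illustrative toy data and classifier as in the sketch above.
rng = np.random.RandomState(0)
X = np.r_[rng.randn(20, 2) - [2, 2], rng.randn(20, 2) + [2, 2]]
y = [0] * 20 + [1] * 20
clf = svm.SVC(kernel='linear').fit(X, y)

# Keep the original plt.cm.Paired colormap; an explicit edge colour keeps
# both the points and the hollow support-vector rings visible under
# matplotlib 2.0 defaults.
plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.Paired, edgecolors='k')
plt.scatter(clf.support_vectors_[:, 0], clf.support_vectors_[:, 1],
            s=100, facecolors='none', edgecolors='k')
plt.show()
```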
Reference Issue
#8364
What does this implement/fix? Explain your changes.
Modify scikit-learn examples for better looking plots in matplotlib v2
Any other comments?