[MRG+1] Add OneVs{One,All}Classifier._pairwise: fix for #7306 #7350
Conversation
sklearn/multiclass.py (Outdated)

    @property
    def _pairwise(self):
        '''Indicate if wrapped estimator is using a precomputed Gram matrix'''
Use double quotes """ please.
Just fixed the single quotes to double quotes.
sklearn/tests/test_multiclass.py (Outdated)

    clf_precomputed = svm.SVC(kernel='precomputed')
    clf_notprecomputed = svm.SVC()
    ovrFalse = OneVsRestClassifier(clf_notprecomputed)
for MultiClassClassifier in [OneVsRestClassifier, OneVsOneClassifier]:?
is it clear what I mean with the test for …

Not exactly, we crossposted though. Something similar to the test I linked to?

Okay, so I added another test which checks cross_val_score with precomputed vs. linear kernels, but this only works for …
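The idea of that test can be sketched as follows (an illustration of the approach, not the PR's actual test; variable names are mine):

```python
import numpy as np
from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier

# With a linear kernel, the precomputed Gram matrix is just X @ X.T,
# so both classifiers below solve the same optimization problem.
X, y = datasets.load_iris(return_X_y=True)
K = np.dot(X, X.T)

ovr_linear = OneVsRestClassifier(svm.SVC(kernel='linear'))
ovr_precomp = OneVsRestClassifier(svm.SVC(kernel='precomputed'))

# The precomputed variant only works if cross-validation slices K on
# both axes, which requires the wrapper to report that its inner
# estimator is pairwise.
scores_linear = cross_val_score(ovr_linear, X, y, cv=3)
scores_precomp = cross_val_score(ovr_precomp, K, y, cv=3)
np.testing.assert_array_almost_equal(scores_linear, scores_precomp)
```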
sklearn/tests/test_multiclass.py (Outdated)

    # for MultiClassClassifier in [OneVsRestClassifier, OneVsOneClassifier]:
    for MultiClassClassifier in [OneVsRestClassifier]:
        ovrFalse = MultiClassClassifier(clf_notprecomputed)
please avoid camel case
(unless it's a class name)
Not sure where you're saying tests fail: it looks like tests are passing, except the PEP8 check, which may be merely due to your print statements.

They all succeed as is, but I have commented out the OneVsOneClassifier case. The failure happens with the OneVsOneClassifier.

oh, right.

Regarding the error, could you leave the broken test in, or else report the full traceback, so we don't necessarily need to go run it to help you debug?

Hi, yes, I have reincluded it.
sklearn/tests/test_multiclass.py (Outdated)

    for MultiClassClassifier in [OneVsRestClassifier, OneVsOneClassifier]:
        ovrFalse = MultiClassClassifier(clf_notprecomputed)
        assert_false(ovrFalse._pairwise)
camelCase
Right. I've never touched the 1v1 code before. Here 1v1 pulls out only some samples from X without respect to …
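Concretely, the trouble is that a precomputed kernel matrix has samples on both axes, so selecting rows alone yields something that is no longer a Gram matrix (a minimal numpy illustration, not code from the PR):

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.rand(6, 3)
K = np.dot(X, X.T)          # 6x6 Gram matrix: samples on both axes

ind = np.array([0, 2, 5])   # samples belonging to the two classes in play

# Row-only indexing (what X[ind[cond]] does) keeps all six columns,
# so the result is not a valid kernel for the 3 selected samples:
rows_only = K[ind]
assert rows_only.shape == (3, 6)

# A pairwise-aware split must index both axes:
K_sub = K[np.ix_(ind, ind)]
np.testing.assert_array_almost_equal(K_sub, np.dot(X[ind], X[ind].T))
```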
I tried to play around with _safe_split, but I'm not exactly sure how to change that code to use it properly. Any hints? Thanks!
https://github.com/scikit-learn/scikit-learn/blob/5305861/sklearn/multiclass.py#L403 should be changed to be something like `return _fit_binary(estimator, _safe_split(estimator, X, y, indices=ind[cond])[0], y_binary, classes=[i, j])`. Ideally we wouldn't duplicate work with …
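The suggested call can be exercised end to end roughly like this (a sketch that assumes _safe_split tolerates y=None; in released scikit-learn the helper lives in sklearn.utils.metaestimators):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils.metaestimators import _safe_split

# Reproduce the OvO fit path for one pair of classes (i, j) on a
# precomputed Gram matrix.
rng = np.random.RandomState(0)
X = rng.rand(12, 4)
y = np.repeat([0, 1, 2], 4)
K = np.dot(X, X.T)

i, j = 0, 1
cond = np.logical_or(y == i, y == j)
ind = np.arange(K.shape[0])
y_binary = y[cond]

estimator = SVC(kernel='precomputed')
# For a pairwise estimator, _safe_split slices K on both axes, giving a
# square Gram matrix restricted to the selected samples.
K_sub, _ = _safe_split(estimator, K, None, indices=ind[cond])
assert K_sub.shape == (cond.sum(), cond.sum())
estimator.fit(K_sub, y_binary)
```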
Okay, so I had tried something similar (at least it gave me the same error as we now see). It breaks a bunch of the other tests as well, always in these lines: …

error type and message?

Sorry, cut it out, see above.
sklearn/multiclass.py (Outdated)

    ind = np.arange(X.shape[0])
    return _fit_binary(estimator, X[ind[cond]], y_binary, classes=[i, j])

    return _fit_binary(estimator, _safe_split(estimator, X, y, indices=ind[cond])[0], y_binary, classes=[i, j])
sorry, my mistake. you shouldn't be passing y into _safe_split. Rather, _safe_split should probably allow y=None.
Okay, so that fixes the other tests, but our new test fails, again only on the OneVsOneClassifier.

I guess, looking at it a bit more, that we need to add the same …

By not moving …

I don't think it belongs in …

Arguably, … Recall, however, that …

Yeah, okay, that makes a lot of sense. I'm happy to move everything over to …

I don't think it's relevant to the prediction logic.

If you're lucky, the next review will just be a double-check with no work from you.
ogrisel left a comment:
Apart from the following comments, this LGTM.
    K = np.dot(X, X.T)

    cv = ShuffleSplit(test_size=0.25, random_state=0)
    tr, te = list(cv.split(X))[0]
Could you please use more explicit variable names? E.g. train_indices and test_indices.
Just to clarify, this is moved code, not new code. Still, it might be a good idea to improve it, as it's easy to do so.
    X_tr, y_tr = _safe_split(clf, X, y, tr)
    K_tr, y_tr2 = _safe_split(clfp, K, y, tr)
    assert_array_almost_equal(K_tr, np.dot(X_tr, X_tr.T))
Please also check that y_tr (to be renamed y_train) and y_tr2 (to be renamed y_train2) are equal.
    X_te, y_te = _safe_split(clf, X, y, te, tr)
    K_te, y_te2 = _safe_split(clfp, K, y, te, tr)
    assert_array_almost_equal(K_te, np.dot(X_te, X_tr.T))
Similar update required here.
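Putting both requests together, the moved test might read as follows (a sketch with the renamed variables and the extra y equality checks; numpy.testing stands in for sklearn's assert helpers):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.utils.metaestimators import _safe_split

rng = np.random.RandomState(0)
X = rng.rand(10, 3)
y = rng.randint(0, 2, 10)
K = np.dot(X, X.T)

train_indices = np.arange(7)
test_indices = np.arange(7, 10)

clf = SVC(kernel='linear')
clfp = SVC(kernel='precomputed')

# Train split: K is sliced to the Gram matrix of the training samples.
X_train, y_train = _safe_split(clf, X, y, train_indices)
K_train, y_train2 = _safe_split(clfp, K, y, train_indices)
np.testing.assert_array_almost_equal(K_train, np.dot(X_train, X_train.T))
np.testing.assert_array_equal(y_train, y_train2)

# Test split: rows are the test samples, columns the *training* samples.
X_test, y_test = _safe_split(clf, X, y, test_indices, train_indices)
K_test, y_test2 = _safe_split(clfp, K, y, test_indices, train_indices)
np.testing.assert_array_almost_equal(K_test, np.dot(X_test, X_train.T))
np.testing.assert_array_equal(y_test, y_test2)
```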
    else:
        y_subset = None

    return X_subset, y_subset
Why has this been moved to metaestimators.py instead of keeping it in sklearn/model_selection/_split.py ?
It is now used in sklearn.multiclass and sklearn.model_selection. Do you think it belongs in model_selection? Do you think it would be better off in sklearn.utils.__init__ than sklearn.utils.metaestimators?
sklearn.model_selection._split seemed like a good module to host a private helper function named _safe_split. But I don't care that much.
> sklearn.model_selection._split seemed like a good module to host a private helper function named _safe_split. But I don't care that much.
It was, but I think the name is not quite right. It's just harder to come up with a better one: _pairwise_friendly_indexing?
sklearn/multiclass.py (Outdated)

    def __init__(self, estimator, n_jobs=1):
        self.estimator = estimator
        self.n_jobs = n_jobs
        self.pairwise_indices_ = None
The constructor of a scikit-learn estimator should never set attributes with a trailing _; it should only store hyperparameters as attributes. Attributes with a trailing _ should only be set by the fit method, or by a private submethod called only at fit time.

At test time, methods that need access to that attribute can check its presence with the _check_fitted_model helper.
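The convention, in a minimal sketch (a hypothetical estimator for illustration, not the PR's code):

```python
from sklearn.base import BaseEstimator

class PairwiseMeta(BaseEstimator):
    """Hypothetical meta-estimator illustrating the convention."""

    def __init__(self, estimator, n_jobs=1):
        # __init__ only stores hyperparameters, untouched.
        self.estimator = estimator
        self.n_jobs = n_jobs
        # NOT here: self.pairwise_indices_ = None

    def fit(self, X, y):
        # Trailing-underscore attributes are set at fit time only.
        self.pairwise_indices_ = None  # real fit logic would fill this in
        return self

est = PairwiseMeta(estimator=None)
assert not hasattr(est, 'pairwise_indices_')  # unfitted: no attribute yet
est.fit(None, None)
assert hasattr(est, 'pairwise_indices_')
```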
I thought we had a consistency check in test_common for this kind of thing, but maybe it's not applied on meta-estimators.
Somehow I missed this, sorry.
sklearn/multiclass.py (Outdated)

    y : numpy array of shape [n_samples]
        Predicted multi-class targets.
    """
why the new line here?
    if indices is None:
        Xs = [X] * len(self.estimators_)
    else:
        Xs = [X[:, idx] for idx in indices]
Is this case tested? If not please add a dedicated test in test_multiclass.py with SVC(kernel='precomputed') and check the expected shape of the output.
Let me know if the test test_pairwise_indices is what you are looking for here.
I would like to have a test that checks the call to the decision_function method on OvO & OvR wrapped models fit on a precomputed kernel.
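Such a check might look like this (a sketch assuming current scikit-learn behaviour, not the test that was eventually added):

```python
import numpy as np
from sklearn import datasets, svm
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = datasets.load_iris(return_X_y=True)
n_classes = len(np.unique(y))
K = np.dot(X, X.T)  # precomputed linear kernel

for MetaClassifier in (OneVsRestClassifier, OneVsOneClassifier):
    clf = MetaClassifier(svm.SVC(kernel='precomputed')).fit(K, y)
    # decision_function must slice the kernel columns correctly
    # (via the per-pair indices for OvO) before calling each binary SVC.
    dec = clf.decision_function(K)
    assert dec.shape == (X.shape[0], n_classes)
```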
doc/whats_new.rst (Outdated)

    - Cross-validation of :class:`OneVsOneClassifier` and
      :class:`OneVsRestClassifier` now works with precomputed kernels.
      (`#7350 <https://github.com/scikit-learn/scikit-learn/pull/7350/>`_)
Indentation
> It is now used in sklearn.multiclass and sklearn.model_selection. Do you think it belongs in model_selection? Do you think it would be better off in sklearn.utils.__init__ than sklearn.utils.metaestimators?
Even I feel this should reside inside …

I don't get why it should belong exclusively in model selection. It pertains to anything that indexes on …

I think the naming, signature, and the intended use are not generic enough. At least a generic version of such a function should maybe be called …

I made the changes requested by @ogrisel, other than those related to …
doc/whats_new.rst (Outdated)

    :class:`OneVsRestClassifier` now works with precomputed kernels.
    (`#7350 <https://github.com/scikit-learn/scikit-learn/pull/7350/>`_)
    by `Russell Smith <https://github.com/rsmith54>`_.
You could also add your name to the bottom as we are expecting more amazing pull requests like these from you ;)
Now that you have added it there you can remove the link here... :)
As you wish. You can leave it where it is.

Thanks @rsmith54!

This proved somewhat trickier than was first thought, so big thanks.

Thanks for all the help!
Reference Issue
Fixes #7306.
What does this implement/fix? Explain your changes.
Adds `_pairwise` to `OneVsOneClassifier` and `OneVsAllClassifier`, and adds a test to check this is properly set.
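In outline, the added property just forwards the wrapped estimator's flag (a self-contained sketch with stand-in classes; scikit-learn has since replaced `_pairwise` with an estimator tag):

```python
class PrecomputedKernelEstimator:
    """Stand-in for an estimator such as SVC(kernel='precomputed')."""
    _pairwise = True

class MetaClassifier:
    """Stand-in for OneVsOneClassifier / OneVsRestClassifier."""

    def __init__(self, estimator):
        self.estimator = estimator

    @property
    def _pairwise(self):
        """Indicate if wrapped estimator is using a precomputed Gram matrix"""
        # Defaults to False for estimators that don't define the flag.
        return getattr(self.estimator, "_pairwise", False)

assert MetaClassifier(PrecomputedKernelEstimator())._pairwise is True
assert MetaClassifier(object())._pairwise is False
```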