Fitting a model several times when building a validation or learning curve can be costly, while evaluating an extra scorer on an already-fitted model is very fast. It would be useful if a list of scorers could be passed to:

sklearn.learning_curve.learning_curve
sklearn.learning_curve.validation_curve

as the argument scoring = [scorer1, scorer2, scorer3, ...].

As a workaround, I'm going to see whether my scorer can return a float that carries extra information as additional properties, namely the extra scores I want to evaluate (in my case I'd even like to include confusion matrices). I'm fairly new to Python, though, and I don't know whether that is possible or easy; in any case, such an approach seems unnecessarily convoluted.

learning_curve and validation_curve are the two functions relevant to me, but I don't know whether other methods could also benefit from this enhancement.

Would you consider it feasible to expand the scoring functionality to accept lists of scorers?
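The requested behaviour can be sketched in plain Python: fit once per CV split, then apply every scorer in a list to that single fit, instead of re-running the whole cross-validation once per metric. Everything below (the `multi_scorer_eval` helper, the `MeanModel` toy estimator, the `kfold_indices` splitter) is a hypothetical illustration, not scikit-learn API:

```python
def kfold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for simple contiguous k-fold CV."""
    fold = n // k
    idx = list(range(n))
    for i in range(k):
        test = idx[i * fold:(i + 1) * fold]
        train = idx[:i * fold] + idx[(i + 1) * fold:]
        yield train, test

class MeanModel:
    """Toy estimator: always predicts the training-set mean."""
    def fit(self, X, y):
        self.mean_ = sum(y) / len(y)
        return self
    def predict(self, X):
        return [self.mean_] * len(X)

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def multi_scorer_eval(model, X, y, scorers, k=3):
    """Fit the model once per split, then apply *every* scorer to that fit."""
    results = {name: [] for name in scorers}
    for train, test in kfold_indices(len(y), k):
        model.fit([X[i] for i in train], [y[i] for i in train])
        preds = model.predict([X[i] for i in test])
        truth = [y[i] for i in test]
        for name, fn in scorers.items():
            results[name].append(fn(truth, preds))
    return results

X = list(range(12))
y = [float(v) for v in X]
scores = multi_scorer_eval(MeanModel(), X, y, {"mse": mse, "mae": mae})
```

The point of the sketch is the cost structure: one `fit` per split, arbitrarily many cheap scorer calls per fit, which is exactly what passing `scoring = [scorer1, scorer2, ...]` would buy inside `learning_curve` and `validation_curve`.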
Yes, we definitely want that, for all the cross-validation, grid-search, etc. methods.
There is some work on this in #2759, but it has stalled somewhat. We are currently working on a release, but afterwards this is one of my top priorities.
I think this should be closed as a duplicate of #1850; thank you @VoidBrained for voicing your support. A couple of years ago I also toyed with an object that could be interpreted as a float (via the __float__ magic method) while carrying other information, but it is too hacky: the system needs to provide a proper way to do this.