A Policy Proposal on Missing Data for Future Contributions #9854
Comments
No. For most models there is no natural way to handle missing data. For trees there are several "natural" ways. We try not to do too much magic under the hood, so I think it makes the most sense for the user to select the imputation algorithm themselves. How would you support missing values in an SVM in a non-surprising way?
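(For illustration, a minimal sketch of the explicit-imputation approach described above, using the current sklearn.impute API; the data and pipeline are made up.)

```python
# Sketch of the "user selects the imputation algorithm" approach:
# impute explicitly in a pipeline instead of expecting the SVM to
# handle NaNs internally. Data here is invented for illustration.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X = np.array([[1.0, 2.0], [np.nan, 3.0], [7.0, np.nan], [4.0, 5.0]])
y = np.array([0, 0, 1, 1])

# Mean imputation is an explicit, user-visible choice; median or
# KNN-based imputation could be swapped in just as easily.
clf = make_pipeline(SimpleImputer(strategy="mean"), StandardScaler(), SVC())
clf.fit(X, y)
```
|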
@amueller Thanks for the response. As I pointed out in my original post, I was only referring to the class of algorithms where such support "makes sense and is established in the literature."
For instance, you mentioned decision trees, and I am myself working on refactoring the GMM so that it can be fit "naturally" without explicit imputation beforehand. Obviously not all models will have an established "natural" approach, and it would be best to leave those alone; but for the ones that do (or will), it might be something worth considering.
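(To make the GMM idea concrete, a hedged sketch that is not scikit-learn API: each sample's Gaussian density can be evaluated over its observed dimensions only, since marginalizing a multivariate normal just selects the corresponding sub-mean and sub-covariance.)

```python
# Hypothetical sketch (not scikit-learn API): evaluate a Gaussian
# log-density using only a sample's observed dimensions, the building
# block of an EM fit that needs no explicit imputation step.
import numpy as np
from scipy.stats import multivariate_normal

def loglik_observed(x, mean, cov):
    obs = ~np.isnan(x)  # mask of observed features
    # Marginalizing a multivariate normal over the missing dimensions
    # amounts to slicing out the observed sub-mean and sub-covariance.
    return multivariate_normal.logpdf(
        x[obs], mean=mean[obs], cov=cov[np.ix_(obs, obs)]
    )

x = np.array([1.0, np.nan, 2.5])
print(loglik_observed(x, mean=np.zeros(3), cov=np.eye(3)))
```
|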
I'm not sure what this means. Perhaps it means: test that meta-estimators support NaN. Perhaps the same is true of some transformers (?), affinity/distance-based learners, etc. |
Yes, estimators, transformers, metrics, what have you. My point was: if there is a well-established method of handling missing values for any of these algorithms, then it might make sense to actually have them be capable of doing so here in sklearn. Further, my post was geared more towards things that will be added in the future, since that might be relatively less work for the original author compared to future refactoring by someone else. Of course, one could also consider the same kind of NaN support for already existing algorithms, where feasible; however, that would probably take a significant amount of work (though that is mostly a guess on my part). |
Methods for handling missing values with SVMs do exist; see e.g. http://www.sciencedirect.com/science/article/pii/S0893608005001292. However, I don't know how good they are. |
I vote for making all the generic enough algorithms, like those in preprocess, usable in the presence of missing values. It's a pain in the ass to reimplement the algos myself to preprocess the data for xgboost only because sklearn.preprocess is crippled and raises errors if it encounters NaN. |
I am -1 on this as well, for two main reasons: (1) it would dramatically increase implementation/review time even for algorithms where handling missingness is natural, and (2) there are way too many ways to handle missingness (ignore missing values, data-centric imputation, model-centric imputation...) for us to choose one possibility as the standard.
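(As a hedged illustration of two of those families in current scikit-learn, on a made-up array: SimpleImputer is data-centric, filling in per-column statistics, while IterativeImputer is model-centric, predicting each feature from the others.)

```python
# Sketch contrasting two of the imputation families mentioned above.
import numpy as np
from sklearn.impute import SimpleImputer
# IterativeImputer is still experimental and must be enabled explicitly.
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

X = np.array([[1.0, 2.0], [np.nan, 4.0], [5.0, np.nan], [7.0, 8.0]])

# Data-centric: replace each NaN with a per-column statistic.
print(SimpleImputer(strategy="median").fit_transform(X))
# Model-centric: iteratively regress each feature on the others.
print(IterativeImputer(random_state=0).fit_transform(X))
```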
|
I think the idea of making sure scalers etc. work, with NaNs disregarded, is very fair.
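(A minimal sketch of what that would mean in practice, on made-up data: fit statistics are computed over the observed entries only, and NaNs pass through the transform unchanged, which is how StandardScaler behaves in recent releases.)

```python
# Sketch of NaN-disregarding scaling: statistics come from the observed
# entries only, and NaNs propagate through the transform untouched.
import numpy as np

X = np.array([[1.0, np.nan], [3.0, 4.0], [5.0, 6.0]])
mean = np.nanmean(X, axis=0)  # NaNs ignored in the column means
std = np.nanstd(X, axis=0)    # ...and in the standard deviations
print((X - mean) / std)       # NaNs remain NaN in the output
```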
|
Oh, whoops. I was responding more to the main thread about estimators, not the recent post. I am supportive of metrics and preprocessors handling missing values by ignoring them.
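(For metrics, a hedged sketch of the ignore-missing idea on made-up arrays; note that scikit-learn's metrics currently raise on NaN input, so the masking here is done by hand.)

```python
# Sketch: score only the fully observed (y_true, y_pred) pairs.
import numpy as np
from sklearn.metrics import mean_squared_error

y_true = np.array([1.0, np.nan, 3.0, 4.0])
y_pred = np.array([1.1, 2.0, np.nan, 3.9])

mask = ~(np.isnan(y_true) | np.isnan(y_pred))  # drop pairs with any NaN
print(mean_squared_error(y_true[mask], y_pred[mask]))
```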
|
In the meantime, we have moved towards adding native support for missing values in more and more of our estimators, so I think we can close this issue.
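(One concrete example of that native support, on a toy dataset: HistGradientBoostingClassifier accepts NaN inputs directly and learns which branch missing values should take at each split.)

```python
# Native NaN handling: no imputation step is needed before fitting.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 5.0], [4.0, 6.0]])
y = np.array([0, 0, 1, 1])

# min_samples_leaf is relaxed only because the toy dataset is tiny.
clf = HistGradientBoostingClassifier(min_samples_leaf=1).fit(X, y)
print(clf.predict(X))
```
|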
As you might know, a number of us are working towards bringing various imputation transformers into scikit-learn. In addition to imputation transformers, there are also a number of PRs working to implement direct model estimation in the presence of missing data, without explicit imputation as a preprocessing step. In other words, we are slowly moving in the direction of comprehensively supporting preprocessing and analysis of datasets with missing values.

Given this context, I wanted to start a discussion on whether it would make sense for scikit-learn to make missing-data support either a "requirement" or a "highly recommended" component of all future algorithms added to the library (to the extent that such support makes sense and is established in the literature, of course). In my humble opinion, not only would this approach lead to a more "robust" set of tools, but it would also save a lot of time and resources compared to refactoring the code later, especially when that is done by somebody other than the original author. I would very much appreciate it if the core developers and other contributors could comment on this. Thank you for your time and consideration.
TL;DR: Would it make sense for scikit-learn to have a default policy of missing-value support in all applicable algorithms added to the library in the future?