
A Policy Proposal on Missing Data for Future Contributions #9854


Closed
ashimb9 opened this issue Sep 29, 2017 · 10 comments

Comments

@ashimb9
Contributor

ashimb9 commented Sep 29, 2017

As you might know, a number of us are working towards bringing various imputation transformers into scikit-learn. In addition to imputation transformers, there are also a number of PRs working to implement direct model estimation in the presence of missing data, without explicit imputation as a preprocessing step. In other words, we are slowly moving in the direction of comprehensively supporting preprocessing and analysis of datasets with missing values. Given this context, I wanted to start a discussion on whether it would make sense for scikit-learn to consider making missing-data support either a "requirement" or a "highly recommended" component of all future algorithms added to the library (to the extent that such support makes sense and is established in the literature, of course). In my humble opinion, not only would this approach lead to a more "robust" set of tools, it would also save a lot of time and resources compared to refactoring the code at a later time, especially when that is done by somebody other than the original author. I would very much appreciate it if the core developers and other contributors could comment on this issue. Thank you for your time and consideration.

TL;DR: Would it make sense for scikit-learn to have a default policy of missing-value support in all applicable algorithms that are added to the library in the future?

@amueller
Member

No. For most models there is no natural way to handle missing data; for trees there are several "natural" ways. We try not to do too much magic under the hood, so I think it makes most sense if the user selects the imputation algorithm themselves.

How would you support missing values in an SVM in a non-surprising way?
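For reference, the user-selected approach described above looks roughly like this in practice; a minimal sketch assuming scikit-learn >= 0.20 (which provides `SimpleImputer`), with illustrative data:

```python
# The user explicitly chooses the imputation strategy as a pipeline step,
# rather than the estimator handling missing values "magically" under the hood.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.svm import SVC

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])

# Mean imputation feeding into an SVM; swap strategy="median" etc. as needed.
clf = make_pipeline(SimpleImputer(strategy="mean"), SVC())
clf.fit(X, y)
print(clf.predict(X))
```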

@ashimb9
Contributor Author

ashimb9 commented Sep 29, 2017

@amueller Thanks for the response. As I had pointed out in my original post, I was only referring to the class of algorithms where:

such support makes sense and is established in the literature, of course

For instance, you mentioned decision trees, and I am myself working on refactoring the GMM so that it can be fit "naturally" without explicit imputation beforehand. Obviously not all models will have a "natural" approach that is established, and it would be best to leave those alone; but for the ones that do (or will), it might be something worth considering.

@jnothman
Member

jnothman commented Oct 1, 2017

I'm not sure what this means.

Perhaps what it means is: test that meta-estimators support NaN. Perhaps the same is true of some transformers (?), affinity/distance-based learners, etc.
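A rough sketch of the kind of check suggested here: a meta-estimator such as GridSearchCV should pass NaNs through untouched to an inner pipeline that knows how to handle them. The data and parameter grid below are purely illustrative; assumes scikit-learn >= 0.20:

```python
# GridSearchCV only splits and indexes X; validation is left to the inner
# pipeline, so NaNs reach the SimpleImputer step intact and nothing raises.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression

X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]] * 3)
y = np.array([0, 0, 1, 1] * 3)

pipe = make_pipeline(SimpleImputer(), LogisticRegression())
param_grid = {"simpleimputer__strategy": ["mean", "median"]}
search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)  # does not raise: the meta-estimator tolerates NaN input
print(search.best_params_)
```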

@ashimb9
Contributor Author

ashimb9 commented Oct 1, 2017

Yes, estimators, transformers, metrics, what have you. My point was: if there is a well-established method of handling missing values for any of these algorithms, then it might make sense to actually have them capable of doing so here in sklearn. Further, my post was geared more towards things that will be added in the future, since that would be relatively less work for the original author than future refactoring by someone else. Of course, one could also consider the same type of NaN support for already existing algorithms, where feasible; however, that would probably take a significant amount of work (is my mostly random guess).

@lesshaste

lesshaste commented Oct 3, 2017

Methods for handling missing values with SVMs do exist; see e.g. http://www.sciencedirect.com/science/article/pii/S0893608005001292. However, I don't know how good they are.

@KOLANICH

KOLANICH commented Jan 4, 2018

I vote for making all the generic enough algorithms, like those in sklearn.preprocessing, usable in the presence of missing values. It's a pain in the ass to reimplement preprocessing algos myself to prepare data for xgboost, only because sklearn.preprocessing is unsuitable: it is crippled and raises errors whenever it encounters NaN.
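For what it's worth, since scikit-learn 0.20 several sklearn.preprocessing scalers fit their statistics on the observed values only and pass NaNs through unchanged, so scaled data can be handed to a NaN-aware learner such as xgboost. A minimal sketch with illustrative data:

```python
# StandardScaler (>= 0.20) computes mean/std from the non-missing entries
# and leaves NaNs in place in the transformed output.
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0, np.nan], [3.0, 4.0], [5.0, 8.0]])
Xt = StandardScaler().fit_transform(X)

# NaN locations are preserved; statistics ignore the missing entries.
print(np.isnan(Xt))
```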

@jmschrei
Member

jmschrei commented Jan 5, 2018 via email

@jnothman
Member

jnothman commented Jan 5, 2018 via email

@jmschrei
Member

jmschrei commented Jan 5, 2018 via email

@adrinjalali
Member

In the meantime, we have moved towards adding native support for missing values in more and more of our estimators. So I guess we can conclude this issue.
