[MRG] Make tests runnable with pytest without error #8246
Conversation
ping @karandesai-96 just in case you are interested in some kind of explanation of the weird failure we were seeing.
Pytest >= 3.0.0 has deprecated support for yield-based tests.
@lesteve, I'm not yet familiar with py.test or its quirks, so I feel under-qualified to give a thorough review. But I appreciate the risk of merge conflicts, so will do what I can.
Regarding the quirk you describe above, is one option to wrap the generators so that all yielded arguments are deepcopied during iteration? |
I realise that won't fix the issue with closures. |
Without a wrapper like that, it seems one needs to carefully check that all arguments are immutable or otherwise unmodified. I'd rather have something with assurances, like a deepcopy of all arguments.
I think if we're going to support pytest we need CI to ensure it continues to work. Add a Travis build that uses pytest?
I was thinking about pytest as a "use at your own risk" kind of thing for now, hence the minimal PR. Given the limited set of changes I had to do to get the tests running without error with pytest, I am strongly encouraged to think that going forward the tests will pass with pytest if they pass with nose. As soon as this PR is merged, I will use pytest to run the tests and I am willing to iron out any quirks, if any, that we find along the road.
I don't see much harm in a pytest CI run during the transition. @ogrisel? |
Codecov Report
@@ Coverage Diff @@
## master #8246 +/- ##
==========================================
+ Coverage 94.75% 94.75% +<.01%
==========================================
Files 342 342
Lines 60801 60804 +3
==========================================
+ Hits 57609 57612 +3
Misses 3192 3192
Continue to review full report at Codecov.
I added a build on Travis that uses pytest. Note that at the moment pytest does not run doctests from the rst files in the documentation.
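For reference, pytest's doctest collection is driven by its ini options. A hedged sketch of what a setup.cfg section could look like (option names are from pytest's configuration docs, not the PR's actual settings):

```ini
[tool:pytest]
# Collect doctests from Python modules; doctests embedded in rst
# files would additionally need --doctest-glob patterns.
addopts = --doctest-modules
doctest_optionflags = NORMALIZE_WHITESPACE ELLIPSIS
```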
@ogrisel feel free to take a look at this if you have a bit of time |
Codecov Report
@@ Coverage Diff @@
## master #8246 +/- ##
==========================================
+ Coverage 94.75% 94.75% +<.01%
==========================================
Files 342 342
Lines 60801 60816 +15
==========================================
+ Hits 57609 57624 +15
Misses 3192 3192
Continue to review full report at Codecov.
LGTM, I think this is a good first step and it's nice to have both nosetests and pytest work for the next release. After 0.19 we can refactor the test suite more deeply to make it more py.test native (using parametrize instead of yields) and drop nosetests support in 0.20.
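As a sketch of that migration (hypothetical test names, not code from this PR), a yield-based generator test can be rewritten with pytest.mark.parametrize so each parameter becomes an independent test:

```python
import pytest

# Old nose style: a generator yielding (check, args) pairs.
# def test_squares():
#     for n in [1, 2, 3]:
#         yield check_square, n

def check_square(n):
    assert n * n == n ** 2

# pytest-native style: parameters are fixed at collection time and
# each one shows up as a separate test in the report.
@pytest.mark.parametrize("n", [1, 2, 3])
def test_square(n):
    check_square(n)
```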
Squash merged. The changes are small so it should be easy to revert should it cause any problems.
This is a good move 😄
nosetests -s --with-coverage --with-timer --timer-top-n 20 sklearn
else
nosetests -s --with-timer --timer-top-n 20 sklearn
TEST_CMD="$TEST_CMD --with-coverage"
I'm not sure you mean to repeat --with-coverage here and in setting TEST_CMD above.
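One way to avoid that duplication (a hedged sketch, not the PR's actual script; variable names assumed) is to build the command string once and append each optional flag exactly once:

```shell
#!/bin/sh
# Build the test command incrementally; optional flags such as
# --with-coverage are appended in a single place.
TEST_CMD="nosetests -s --with-timer --timer-top-n 20"
if [ "$COVERAGE" = "true" ]; then
    TEST_CMD="$TEST_CMD --with-coverage"
fi
# Echo instead of executing, since nose may not be installed here.
echo "$TEST_CMD sklearn"
```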
Good point, I'll fix that.
FYI #8444.
* Make tests runnable with pytest without error. Errors were due to pytest quirks with (deprecated) yield support. * Add pytest build on Travis and tweak pytest settings in setup.cfg * Tweak comment
Related to #7319. This is the minimal set of changes to make the tests runnable with pytest. All in all it amounts to:
* moving check_ functions using closures to module level
Errors were due to pytest quirks with (deprecated) yield support. AFAICT the underlying reason is that pytest first collects the tests (collecting the test function and the arguments) and then runs them. It is somewhat easy to modify the arguments at collection time, which creates surprises during the test run.
See below for an example test highlighting the quirk:
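The original snippet and its output are not preserved in this excerpt; the following is a minimal self-contained sketch (hypothetical function names) that simulates the difference between nose's run-each-case-as-it-is-yielded behaviour and pytest's collect-then-run behaviour when a generator mutates an argument it has already yielded:

```python
def check_value(params, expected):
    assert params["n"] == expected

def make_cases():
    # Yields (check, args...) tuples, then mutates a dict that was
    # already handed out -- the source of the collection-time quirk.
    params = {"n": 0}
    for i in range(3):
        yield check_value, params, i
        params["n"] += 1

def run_immediately(gen):
    # nose style: each case runs before the generator resumes.
    for func, *args in gen:
        func(*args)

def collect_then_run(gen):
    # pytest style: the generator is exhausted first, so every
    # collected case sees the final mutated state of params.
    cases = list(gen)
    for func, *args in cases:
        func(*args)

run_immediately(make_cases())  # passes: n is checked before each mutation
try:
    collect_then_run(make_cases())
except AssertionError:
    print("collected cases saw the mutated params dict")
```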