test-suite: use pytest features, improve coverage, fix mistakes #520
This PR contains a few commits as part of an attempt to make full use of `pytest` to run the tests. It increases the coverage in `test_ampgo.py`, `test_basinhopping.py`, `test_brute.py`, and `test_itercb.py`. As an outcome of the increased coverage, a few oversights were found (incidentally?), all introduced by me ;) Anyway, the following fixes were made:
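As a hedged illustration of the kind of `pytest` features meant here (the function and test names below are invented for this sketch, not taken from the actual test suite), `pytest.mark.parametrize` covers several inputs with one test, and `pytest.raises` covers error paths:

```python
import pytest


def divide(a, b):
    """Toy function standing in for library code under test."""
    if b == 0:
        raise ZeroDivisionError("b must be nonzero")
    return a / b


# one test function exercises several input/output combinations
@pytest.mark.parametrize("a, b, expected", [(6, 3, 2), (9, 3, 3), (-4, 2, -2)])
def test_divide(a, b, expected):
    assert divide(a, b) == expected


# error paths count toward coverage too
def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError, match="nonzero"):
        divide(6, 0)
```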
- `show_candidates` is now explicit

The code would probably benefit from some refactoring, but that should be done at the same time as we address the `_calculate_statistics` function. For example, right now `result.residual` is set when the fit is aborted, but it still needs to be calculated if the fit finishes normally. In many instances we do a check like `if not result.aborted` and then calculate/fill in the `MinimizerResult` attributes. After that we calculate the statistics, which should be done regardless, and then, only if not aborted, we should calculate the covariance matrix. There are a few too many if-statements involved there, and I cannot easily find a consistent way of doing this for all solvers. So my changes work as intended for now, and making big changes just before a release does not seem smart... Ideally, we would tackle this once the test coverage has increased and we are less likely to miss potential issues.

Finally, I updated the dependencies in
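A minimal sketch of the control flow described above (the class and helper names here are illustrative stand-ins, not the actual `Minimizer` code):

```python
class Result:
    """Stand-in for a MinimizerResult (illustrative only)."""

    def __init__(self):
        self.aborted = False
        self.residual = None
        self.covar = None


def finalize(result, compute_residual, compute_stats, compute_covar):
    # residual is already set when the fit was aborted; otherwise it
    # still needs to be calculated here
    if not result.aborted:
        result.residual = compute_residual()
    # statistics should be computed regardless of whether the fit
    # was aborted
    compute_stats(result)
    # the covariance matrix only makes sense for a completed fit
    if not result.aborted:
        result.covar = compute_covar()
    return result
```

The sketch shows why the `if not result.aborted` checks pile up: two of the three steps are conditional on the same flag, with an unconditional step in between.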
`.travis.yml` to add `numdifftools` and install `emcee` only for `version == latest`. These dependencies and `pytest` are installed with `pip` because `conda` does not have consistent versions available for all Python versions we test.
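A hedged sketch of the kind of install step this implies (a config fragment; the actual `.travis.yml` entries and the `VERSION` variable name may differ):

```shell
# install test dependencies with pip, since conda does not have
# consistent versions across all tested Python versions
pip install pytest numdifftools

# emcee is only installed for the "latest" dependency set
if [ "$VERSION" = "latest" ]; then
    pip install emcee
fi
```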