
TST: Add the first test using hypothesis #14440


Closed
wants to merge 5 commits
5 changes: 5 additions & 0 deletions INSTALL.rst.txt
@@ -33,8 +33,13 @@ Building NumPy requires the following installed software:

This is required for testing numpy, but not for using it.

4) hypothesis__ (optional)

This is required for testing numpy, but not for using it.

Python__ http://www.python.org
pytest__ http://pytest.readthedocs.io
hypothesis__ https://hypothesis.readthedocs.io


.. note::
14 changes: 14 additions & 0 deletions numpy/core/tests/test_arrayprint.py
@@ -12,6 +12,10 @@
)
import textwrap

import hypothesis
import hypothesis.extra.numpy


class TestArrayRepr(object):
    def test_nan_inf(self):
        x = np.array([np.nan, np.inf])
@@ -399,6 +403,16 @@ def test_wide_element(self):
            "[ 'xxxxx']"
        )

    @hypothesis.seed(43)
    @hypothesis.given(hypothesis.extra.numpy.from_dtype(np.dtype("U")))
    def test_any_text(self, text):
Member:

My main question here is: are the text strings that are generated by hypothesis 100% reproducible over time/versions? And if not, what's the workflow to deal with a bug report for some random string from a random user on some OS/install method/etc.?

Contributor Author:

Thanks. Commit 28887ac should address this.

I tested the reproducibility with the following approach:
Before making the change in 28887ac:

  1. Add a side-effect in the test to append the text to a file ~/text.txt
  2. Set PYTHONHASHSEED=random
  3. Run the test once, mv ~/text.txt ~/text0.txt
  4. Run the test again, diff ~/text.txt ~/text0.txt -> The files are different.
  5. Then make the change in 28887ac
  6. Perform step 3 and 4, the files are now identical.
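The fix in 28887ac pins the generation seed. As an illustration of the mechanism (not the PR's actual test), a minimal sketch of recording what a seeded hypothesis test draws, assuming the strategy and function names here, might look like:

```python
from hypothesis import given, seed, strategies as st

collected = []

@seed(43)                  # fixed seed, as added in commit 28887ac
@given(st.text())
def record_examples(text):
    # Side effect standing in for the ~/text.txt experiment above.
    collected.append(text)

record_examples()
first_run = list(collected)
collected.clear()
record_examples()
# With the seed pinned, repeated runs should draw the same example sequence.
```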

Member:
We can also use the @example decorator to force a known failing case into the test once it is reported by a user or by the CI system.
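For illustration, pinning a reported failure with @example might look like the following sketch (the failing input and test body here are hypothetical, not from this PR):

```python
from hypothesis import example, given, strategies as st

@given(st.text())
@example("\U0001d11e")   # hypothetical previously reported failing input
def test_text_roundtrip(text):
    # @example guarantees this exact input is always tried, in addition
    # to the randomly generated cases.
    assert "".join(list(text)) == text

test_text_roundtrip()    # the decorated test is directly callable
```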

        a = np.array([text, text, text])
        assert_equal(a[0], text)
        assert_equal(
            np.array2string(a, max_line_width=len(repr(text)) * 2 + 3),
            "[{0!r} {0!r}\n {0!r}]".format(text)
        )
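The max_line_width arithmetic in the assertion above can be checked by hand with a concrete string (this worked example is ours, not part of the PR):

```python
import numpy as np

text = "abc"                        # len(repr(text)) == 5
a = np.array([text, text, text])
width = len(repr(text)) * 2 + 3     # 13: room for two quoted elements, not a third
s = np.array2string(a, max_line_width=width)
# s == "['abc' 'abc'\n 'abc']": two elements on the first line, the
# third wrapped, matching the "[{0!r} {0!r}\n {0!r}]" pattern asserted above.
```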

    @pytest.mark.skipif(not HAS_REFCOUNT, reason="Python lacks refcounts")
    def test_refcount(self):
        # make sure we do not hold references to the array due to a recursive
1 change: 1 addition & 0 deletions test_requirements.txt
@@ -1,4 +1,5 @@
cython==0.29.14
hypothesis==4.53.1
pytest==5.3.1
pytz==2019.3
pytest-cov==2.8.1