.. _testing:

Testing
=======

Matplotlib has a testing infrastructure based on nose_, making it easy
to write new tests. The tests are in :mod:`matplotlib.tests`, and
customizations to the nose testing infrastructure are in
:mod:`matplotlib.testing`. (There is other old testing cruft around,
please ignore it while we consolidate our testing to these locations.)

.. _nose: http://somethingaboutorange.com/mrl/projects/nose/

Requirements
------------

The following software is required to run the tests:

  - nose_, version 1.0 or later

  - `Ghostscript <http://pages.cs.wisc.edu/~ghost/>`_ (to render PDF
    files)

  - `Inkscape <http://inkscape.org>`_ (to render SVG files)

Running the tests
-----------------

Running the tests is simple. Make sure you have nose installed and run
the script :file:`tests.py` in the root directory of the distribution.
The script can take any of the usual `nosetest arguments`_, such as

===================  ======================================
``-v``               increase verbosity
``-d``               detailed error messages
``--with-coverage``  enable collecting coverage information
===================  ======================================
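
For example, the whole suite can be run with increased verbosity and
detailed error messages by combining these flags (just a sketch; any of
the usual nose options can be passed the same way)::

    python tests.py -v -d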

To run a single test from the command line, provide the dot-separated
path to the test module, followed by a colon and the name of the test
function, e.g. (this assumes matplotlib and its tests are installed)::

    python tests.py matplotlib.tests.test_simplification:test_clipping

Alternatively, the tests can be run from within Python (this approach
does not accept command line arguments)::

    import matplotlib
    matplotlib.test()

.. _`nosetest arguments`: http://somethingaboutorange.com/mrl/projects/nose/1.0.0/usage.html

Running tests by any means other than `matplotlib.test()`
does not load the nose "knownfailureif" (Known failing tests) plugin,
causing known-failing tests to fail for real.

Writing a simple test
---------------------

Many elements of Matplotlib can be tested using standard tests. For
example, here is a test from :mod:`matplotlib.tests.test_basic`::

    from nose.tools import assert_equal

    def test_simple():
        """
        very simple example test
        """
        assert_equal(1 + 1, 2)

Nose determines which functions are tests by searching for functions
whose names begin with "test".
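
For instance, with the (hypothetical) names below, only the first
function is picked up as a test::

    def test_addition():
        # Collected by nose: the name starts with "test".
        assert 1 + 1 == 2

    def build_sample_data():
        # Ignored by nose: not a test name.
        return [1, 2, 3]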

If the test has side effects that need to be cleaned up, such as
creating figures using the pyplot interface, use the ``@cleanup``
decorator::

    import matplotlib.pyplot as plt
    from matplotlib.testing.decorators import cleanup

    @cleanup
    def test_create_figure():
        """
        very simple example test that creates a figure using pyplot.
        """
        fig = plt.figure()
        ...

Writing an image comparison test
--------------------------------

Writing an image based test is only slightly more difficult than a
simple test. The main consideration is that you must specify the
"baseline", or expected, images in the
:func:`~matplotlib.testing.decorators.image_comparison` decorator. For
example, this test generates a single image and automatically tests
it::

    import numpy as np
    import matplotlib
    from matplotlib.testing.decorators import image_comparison
    import matplotlib.pyplot as plt

    @image_comparison(baseline_images=['spines_axes_positions'])
    def test_spines_axes_positions():
        # SF bug 2852168
        fig = plt.figure()
        x = np.linspace(0, 2 * np.pi, 100)
        y = 2 * np.sin(x)
        ax = fig.add_subplot(1, 1, 1)
        ax.set_title('centered spines')
        ax.plot(x, y)
        ax.spines['right'].set_position(('axes', 0.1))
        ax.yaxis.set_ticks_position('right')
        ax.spines['top'].set_position(('axes', 0.25))
        ax.xaxis.set_ticks_position('top')
        ax.spines['left'].set_color('none')
        ax.spines['bottom'].set_color('none')

The first time this test is run, there will be no baseline image to
compare against, so the test will fail. Copy the output images (in
this case `result_images/test_category/spines_axes_positions.*`) to
the `baseline_images` tree in the source directory (in this case
`lib/matplotlib/tests/baseline_images/test_category`) and put them
under source code revision control (with `git add`). When rerunning
the tests, they should now pass.
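
The copy step might look roughly like this (a sketch; the
``test_category`` paths follow the example above)::

    cp result_images/test_category/spines_axes_positions.* \
        lib/matplotlib/tests/baseline_images/test_category/
    git add lib/matplotlib/tests/baseline_images/test_category/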

There are two optional keyword arguments to the `image_comparison`
decorator:

  - `extensions`: If you only wish to test some of the image formats
    (rather than the default `png`, `svg` and `pdf` formats), pass a
    list of the extensions to test.

  - `tol`: This is the image matching tolerance; it defaults to `1e-3`.
    If some variation is expected in the image between runs, this
    value may be adjusted. See the sketch after this list for an
    example of passing both arguments.
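
A decorated test using both keyword arguments might look like this (a
sketch; the test and baseline names are hypothetical, and the imports
from the example above are assumed)::

    @image_comparison(baseline_images=['whizbang_plot'],
                      extensions=['png'],
                      tol=1e-2)
    def test_whizbang_plot():
        fig = plt.figure()
        fig.gca().plot([1, 2, 3])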

Known failing tests
-------------------

If you're writing a test, you may mark it as a known failing test with
the :func:`~matplotlib.testing.decorators.knownfailureif`
decorator. This allows the test to be added to the test suite and run
on the buildbots without causing undue alarm. For example, although
the following test will fail, it is an expected failure::

    from nose.tools import assert_equal
    from matplotlib.testing.decorators import knownfailureif

    @knownfailureif(True)
    def test_simple_fail():
        '''very simple example test that should fail'''
        assert_equal(1 + 1, 3)

Note that the first argument to the
:func:`~matplotlib.testing.decorators.knownfailureif` decorator is a
fail condition, which can be a value such as True, False, or
'indeterminate', or may be a dynamically evaluated expression.
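
A dynamically evaluated condition might look like this (a minimal
sketch; the Python-version check is only a hypothetical condition)::

    import sys

    from matplotlib.testing.decorators import knownfailureif

    # The condition is evaluated when the module is imported, so the
    # test is only expected to fail on the affected interpreter.
    @knownfailureif(sys.version_info[:2] == (2, 6))
    def test_needs_modern_python():
        assert sys.version_info[:2] != (2, 6)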

Creating a new module in matplotlib.tests
-----------------------------------------

We try to keep the tests categorized by the primary module they are
testing. For example, the tests related to the ``mathtext.py`` module
are in ``test_mathtext.py``.

Let's say you've added a new module named ``whizbang.py`` and you want
to add tests for it in ``matplotlib.tests.test_whizbang``. To add
this module to the list of default tests, append its name to
``default_test_modules`` in :file:`lib/matplotlib/__init__.py`.
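
The resulting entry might look roughly like this (a sketch; the
surrounding list items are illustrative, not the actual contents of
:file:`lib/matplotlib/__init__.py`)::

    default_test_modules = [
        'matplotlib.tests.test_basic',
        'matplotlib.tests.test_mathtext',
        'matplotlib.tests.test_whizbang',  # the newly added test module
        ]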

Using tox
---------

`Tox <http://tox.testrun.org/>`_ is a tool for running tests against
multiple Python environments, including multiple versions of Python
(e.g.: 2.6, 2.7, 3.2, etc.) and even different Python implementations
altogether (e.g.: CPython, PyPy, Jython, etc.).

Testing all 4 versions of Python (2.6, 2.7, 3.1, and 3.2) requires
having four versions of Python installed on your system and on the
PATH. Depending on your operating system, you may want to use your
package manager (such as apt-get, yum or MacPorts) to do this, or use
`pythonbrew <https://github.com/utahta/pythonbrew>`_.

tox makes it easy to determine if your working copy introduced any
regressions before submitting a pull request. Here's how to use it:

.. code-block:: bash

    $ pip install tox
    $ tox

You can also run tox on a subset of environments:

.. code-block:: bash

    $ tox -e py26,py27

Tox processes everything serially, so it can take a long time to test
several environments. To speed it up, you might try using a new,
parallelized version of tox called ``detox``. Give this a try:

.. code-block:: bash

    $ pip install -U -i http://pypi.testrun.org detox
    $ detox

Tox is configured using a file called ``tox.ini``. You may need to
edit this file if you want to add new environments to test (e.g.:
``py33``) or if you want to tweak the dependencies or the way the
tests are run. For more info on the ``tox.ini`` file, see the `Tox
Configuration Specification
<http://tox.testrun.org/latest/config.html>`_.

Using Travis CI
---------------

`Travis CI <http://travis-ci.org/>`_ is a hosted CI system "in the
cloud".

Travis is configured to receive notifications of new commits to GitHub
repos (via GitHub "service hooks") and to run builds or tests when it
sees these new commits. It looks for a YAML file called
``.travis.yml`` in the root of the repository to see how to test the
project.

Travis CI is already enabled for the `main matplotlib GitHub
repository <https://github.com/matplotlib/matplotlib/>`_ -- for
example, see `its Travis page
<http://travis-ci.org/#!/matplotlib/matplotlib>`_.

If you want to enable Travis CI for your personal matplotlib GitHub
repo, simply enable the repo to use Travis CI in either the Travis CI
UI or the GitHub UI (Admin | Service Hooks). For details, see `the
Travis CI Getting Started page
<http://about.travis-ci.org/docs/user/getting-started/>`_. This
generally isn't necessary, since any pull request submitted against
the main matplotlib repository will be tested.

Once this is configured, you can see the Travis CI results at
http://travis-ci.org/#!/your_GitHub_user_name/matplotlib -- here's `an
example <http://travis-ci.org/#!/msabramo/matplotlib>`_.