
Interactive HTML summary #128


Merged
merged 24 commits on Feb 2, 2022
24 changes: 19 additions & 5 deletions README.rst
@@ -114,13 +114,24 @@ and the tests will pass if the images are the same. If you omit the
runs, without checking the output images.


Generating a Failure Summary
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Generating a Test Summary
^^^^^^^^^^^^^^^^^^^^^^^^^

By specifying the ``--mpl-generate-summary=html`` CLI argument, an HTML summary
page will be generated showing the baseline, diff and result image for each
failing test. If no baseline images are configured, just the result images will
be displayed. (See also, the **Results always** section below.)
page will be generated showing the result, log entry and RMS of each test,
and the hashes if configured. The baseline, diff and result image for each
failing test will be shown. If **Results always** is configured
(see section below), images for passing tests will also be shown.
If no baseline images are configured, just the result images will
be displayed.

+---------------+---------------+---------------+
| |html all| | |html filter| | |html result| |
+---------------+---------------+---------------+

As well as ``html``, ``basic-html`` can be specified for an alternative HTML
summary which does not rely on JavaScript or external resources. A ``json``
summary can also be saved. Multiple options can be specified comma-separated.
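
For illustration, a rough sketch of requesting the summaries described above when driving pytest from Python (the test module name ``test_plots.py`` is hypothetical; the flags are the ones documented above):

# Hypothetical invocation; "test_plots.py" is a made-up test module name.
import pytest

exit_code = pytest.main([
    "test_plots.py",
    "--mpl",                                        # enable image comparison
    "--mpl-generate-summary=html,basic-html,json",  # comma-separated summary formats
])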

Options
-------
@@ -301,6 +312,9 @@ install the latest version of the plugin then do::
The reason for having to install the plugin first is to ensure that the
plugin is correctly loaded as part of the test suite.

.. |html all| image:: images/html_all.png
.. |html filter| image:: images/html_filter.png
.. |html result| image:: images/html_result.png
.. |expected| image:: images/baseline-coords_overlay_auto_coord_meta.png
.. |actual| image:: images/coords_overlay_auto_coord_meta.png
.. |diff| image:: images/coords_overlay_auto_coord_meta-failed-diff.png
Binary file added images/html_all.png
Binary file added images/html_filter.png
Binary file added images/html_result.png
143 changes: 8 additions & 135 deletions pytest_mpl/plugin.py
@@ -43,45 +43,16 @@

import pytest

SUPPORTED_FORMATS = {'html', 'json'}
from pytest_mpl.summary.html import generate_summary_basic_html, generate_summary_html

SUPPORTED_FORMATS = {'html', 'json', 'basic-html'}

SHAPE_MISMATCH_ERROR = """Error: Image dimensions did not match.
Expected shape: {expected_shape}
{expected_path}
Actual shape: {actual_shape}
{actual_path}"""
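
As a small illustration (all values invented), the template above is filled via ``str.format``:

# Invented values, purely to show how the placeholders are substituted.
message = SHAPE_MISMATCH_ERROR.format(
    expected_shape=(500, 400, 3),
    expected_path="baseline/test_plot.png",
    actual_shape=(640, 480, 3),
    actual_path="results/test_plot.png",
)
print(message)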

HTML_INTRO = """
<!DOCTYPE html>
<html>
<head>
<style>
table, th, td {
border: 1px solid black;
}
.summary > div {
padding: 0.5em;
}
tr.passed .status, .rms.passed, .hashes.passed {
color: green;
}
tr.failed .status, .rms.failed, .hashes.failed {
color: red;
}
</style>
</head>
<body>
<h2>Image test comparison</h2>
%summary%
<table>
<tr>
<th>Test Name</th>
<th>Baseline image</th>
<th>Diff</th>
<th>New image</th>
</tr>
"""


def _download_file(baseline, filename):
# Note that baseline can be a comma-separated list of URLs that we can
@@ -162,7 +133,7 @@ def pytest_addoption(parser):
group.addoption('--mpl-generate-summary', action='store',
help="Generate a summary report of any failed tests"
", in --mpl-results-path. The type of the report should be "
"specified. Supported types are `html` and `json`. "
"specified. Supported types are `html`, `json` and `basic-html`. "
"Multiple types can be specified separated by commas.")

results_path_help = "directory for test results, relative to location where py.test is run"
@@ -712,105 +683,6 @@ def item_function_wrapper(*args, **kwargs):
else:
item.obj = item_function_wrapper

def generate_stats(self):
"""
Generate a dictionary of summary statistics.
"""
stats = {'passed': 0, 'failed': 0, 'passed_baseline': 0, 'failed_baseline': 0, 'skipped': 0}
for test in self._test_results.values():
if test['status'] == 'passed':
stats['passed'] += 1
if test['rms'] is not None:
stats['failed_baseline'] += 1
elif test['status'] == 'failed':
stats['failed'] += 1
if test['rms'] is None:
stats['passed_baseline'] += 1
elif test['status'] == 'skipped':
stats['skipped'] += 1
else:
raise ValueError(f"Unknown test status '{test['status']}'.")
self._test_stats = stats
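
To make the counting above concrete, a hypothetical results mapping and the stats it would yield (note that a hash-passing test with a non-None ``rms`` still increments ``failed_baseline``):

# Invented results, shaped like self._test_results (only the keys used above).
results = {
    "test_a": {"status": "passed", "rms": None},   # hash and baseline both match
    "test_b": {"status": "passed", "rms": 3.2},    # hash matches, baseline image differs
    "test_c": {"status": "failed", "rms": None},   # hash differs, baseline image matches
    "test_d": {"status": "skipped", "rms": None},
}
# Running the loop above over these results gives:
# {'passed': 2, 'failed': 1, 'passed_baseline': 1, 'failed_baseline': 1, 'skipped': 1}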

def generate_summary_html(self):
"""
Generate a simple HTML table of the failed test results
"""
html_file = self.results_dir / 'fig_comparison.html'
with open(html_file, 'w') as f:

passed = f"{self._test_stats['passed']} passed"
if self._test_stats['failed_baseline'] > 0:
passed += (" hash comparison, although "
f"{self._test_stats['failed_baseline']} "
"of those have a different baseline image")

failed = f"{self._test_stats['failed']} failed"
if self._test_stats['passed_baseline'] > 0:
failed += (" hash comparison, although "
f"{self._test_stats['passed_baseline']} "
"of those have a matching baseline image")

f.write(HTML_INTRO.replace('%summary%', f'<p>{passed}.</p><p>{failed}.</p>'))

for test_name in sorted(self._test_results.keys()):
summary = self._test_results[test_name]

if not self.results_always and summary['result_image'] is None:
continue # Don't show test if no result image

if summary['rms'] is None and summary['tolerance'] is not None:
rms = (f'<div class="rms passed">\n'
f' <strong>RMS:</strong> '
f' &lt; <span class="tolerance">{summary["tolerance"]}</span>\n'
f'</div>')
elif summary['rms'] is not None:
rms = (f'<div class="rms failed">\n'
f' <strong>RMS:</strong> '
f' <span class="rms">{summary["rms"]}</span>\n'
f'</div>')
else:
rms = ''

hashes = ''
if summary['baseline_hash'] is not None:
hashes += (f' <div class="baseline">Baseline: '
f'{summary["baseline_hash"]}</div>\n')
if summary['result_hash'] is not None:
hashes += (f' <div class="result">Result: '
f'{summary["result_hash"]}</div>\n')
if len(hashes) > 0:
if summary["baseline_hash"] == summary["result_hash"]:
hash_result = 'passed'
else:
hash_result = 'failed'
hashes = f'<div class="hashes {hash_result}">\n{hashes}</div>'

images = {}
for image_type in ['baseline_image', 'diff_image', 'result_image']:
if summary[image_type] is not None:
images[image_type] = f'<img src="{summary[image_type]}" />'
else:
images[image_type] = ''

f.write(f'<tr class="{summary["status"]}">\n'
' <td>\n'
' <div class="summary">\n'
f' <div class="test-name">{test_name}</div>\n'
f' <div class="status">{summary["status"]}</div>\n'
f' {rms}{hashes}\n'
' </td>\n'
f' <td>{images["baseline_image"]}</td>\n'
f' <td>{images["diff_image"]}</td>\n'
f' <td>{images["result_image"]}</td>\n'
'</tr>\n\n')

f.write('</table>\n')
f.write('</body>\n')
f.write('</html>')

return html_file

def generate_summary_json(self):
json_file = self.results_dir / 'results.json'
with open(json_file, 'w') as f:
@@ -843,13 +715,14 @@ def pytest_unconfigure(self, config):
if self._test_results[test_name][image_type] == '%EXISTS%':
self._test_results[test_name][image_type] = str(directory / filename)

self.generate_stats()

if 'json' in self.generate_summary:
summary = self.generate_summary_json()
print(f"A JSON report can be found at: {summary}")
if 'html' in self.generate_summary:
summary = self.generate_summary_html()
summary = generate_summary_html(self._test_results, self.results_dir)
print(f"A summary of the failed tests can be found at: {summary}")
if 'basic-html' in self.generate_summary:
summary = generate_summary_basic_html(self._test_results, self.results_dir)
print(f"A summary of the failed tests can be found at: {summary}")


Empty file added pytest_mpl/summary/__init__.py