Issue #1373 add test plan #1545
base: master
Conversation
This is just the start of the document. The setup is good enough, but details are lacking for a tester to be able to check off boxes while testing.
- Run the pixi task written below. This will export data to a UGRID NetCDF and
  save it under the name <......>. Open QGIS. Click "Layers" > "Add Layer" >
  "Add mesh". Insert the path <......> in the text box. This will import the
  mesh. Verify that the mesh is rendered properly; if not, open an issue on `GitHub
This is a good start to writing down the things that you want to test. But for now this is still a list that only another involved developer could go through and understand. A test plan usually also contains information about the expectations, so that it's easy for someone to check what is wrong or right.
For example: what does it mean when it is 'rendered properly'? Or: what does it mean that the transient LHM model run of 40 years should be 'possible'?
Fill out the information that you are actually looking for, and be specific. Eventually you want to give this document to an unaware and ignorant tester and make sure that they can do all the steps.
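For context, the export step in the snippet above presumably amounts to something like the following. This is a minimal sketch, assuming the task uses xugrid (the library imod-python builds on for unstructured grids); the sample dataset and output file name are illustrative, not the actual task contents.

.. code-block:: python

   # Hedged sketch: xugrid is assumed; the dataset and path are illustrative.
   import xugrid as xu

   uds = xu.data.disk()                  # example unstructured dataset
   uds.ugrid.to_netcdf("disk_ugrid.nc")  # write a UGRID-compliant NetCDF

The resulting file is what the tester would then load in QGIS through the mesh layer dialog.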
Functional tests
----------------
Under functional testing I would also expect some edge cases testers can think of to make things go wrong.
.. code-block:: console

   pixi run user_acceptance
Sounds like something we could automate in the future.
Step 1 would be to run the performance tests before every release (or more often) and make the output available for download so that a user can check it. Running on controlled machines, e.g. with a Docker image, also makes the results more reliable.
Step 2 would be to also automate the checks that the human eye performs after the tests are run.
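To sketch what Step 2 could look like: an image regression test that compares a fresh render against a stored baseline. The file names, tolerance, and use of PIL and numpy here are assumptions for illustration, not part of the current setup.

.. code-block:: python

   import numpy as np
   from PIL import Image

   def images_match(candidate: str, baseline: str, tol: float = 0.01) -> bool:
       """Return True if the mean absolute pixel difference stays below tol."""
       a = np.asarray(Image.open(candidate).convert("RGB"), dtype=float)
       b = np.asarray(Image.open(baseline).convert("RGB"), dtype=float)
       if a.shape != b.shape:
           return False
       return float(np.mean(np.abs(a - b))) / 255.0 < tol

   assert images_match("render.png", "baseline.png")

Tools such as pytest-mpl already offer this kind of baseline comparison for matplotlib figures.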
Check if the documentation builds without errors and warnings. If there are
errors or warnings, fix them before releasing in a pull request on `Github
<https://github.com/Deltares/imod-python/pulls>`_ . Next, check if the
documentation pages are rendered correctly and if the information on them is not
This also sounds like something we can automate. I can imagine there are UI automation tests, maybe with Ranorex, that can check the HTML and CSS output.
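The build half of this check can be automated today by failing on any Sphinx warning. A minimal sketch, assuming a standard Sphinx layout with the sources under docs/ (the paths are illustrative):

.. code-block:: python

   # -W promotes warnings to errors; -n enables nitpicky reference checking.
   import subprocess
   import sys

   result = subprocess.run(
       ["sphinx-build", "-W", "-n", "-b", "html", "docs", "docs/_build/html"]
   )
   sys.exit(result.returncode)

Checking the rendered pages themselves is harder and would indeed need UI-level tests.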
Fixes #1373
Description
Adds a test plan for the 1.0 release. Two remarks:
Checklist
Issue #nr, e.g. Issue #737