Streaming style test runs and logging #5408

JFoederer opened this issue Apr 21, 2025 · 2 comments
@JFoederer (Contributor)

For larger projects with longer test runs, I like to give early feedback on progress and on the pass/fail status of individual tests. This can be achieved using listeners that report to external systems. However, the way Robot Framework handles results and logging is not entirely suitable for this. Let me explain using two examples.
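
For reference, such a listener can be very small. The sketch below uses Robot Framework's listener API version 3, which passes each finished test's result to `end_test`; the `sink` callable is a placeholder for whatever transport the external system uses (HTTP, a message queue, etc.):

```python
class StreamingReporter:
    """Minimal listener sketch: push each verdict out as soon as it is known."""

    ROBOT_LISTENER_API_VERSION = 3

    def __init__(self, sink=print):
        # 'sink' stands in for the real reporting channel (HTTP client, queue, ...).
        self.sink = sink

    def end_test(self, data, result):
        # Called by Robot Framework right after each test finishes.
        self.sink(f"{result.longname}: {result.status} ({result.message})")
```

It would be taken into use with `robot --listener StreamingReporter.py tests/`. Note that this only *reports* early; the issues described below remain.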

Example 1: Logging

When reporting pass results, usually just having the verdict suffices. However, when reporting fail results, people typically also want the accompanying logging. This enables them to start early analysis of the failure. Perhaps this will even trigger them to abort their test run and free up resources for others, while working on a fix.

The issue with the logging is that the available formats (XML and JSON) are both closed formats. While the test run is in progress, the files are invalid, because there are always some open structures that still need to be closed. There are workarounds, for example 'repairing' (closing) a copy of the incomplete file and then generating a log.html from the partial file.

A cleaner method would be to use a file format that does not need to be closed: a format suited to streaming-like environments that is likely to remain valid even after a full-on crash. Pure text, with indentation for scoping, is an easy solution, but some formatting would be welcome. Maybe a format like .md could work?

File format considerations:

  • No closing needed for valid syntax
  • Supports formatting and layout
  • Supports collapsible sections
  • Supports internal linking
  • Multi-file linking/combining
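
One family of formats that satisfies the first criterion is line-delimited records, such as JSON Lines: every line is a complete document, so the log is readable at any point during the run, and after a crash at worst the final, partially written line is discarded. A minimal sketch (the event names and fields here are made up for illustration, not an existing Robot Framework schema):

```python
import json

def emit_event(stream, event_type, **fields):
    """Append one result event as a single JSON Lines record.
    The file needs no closing structure to stay valid."""
    stream.write(json.dumps({"event": event_type, **fields}) + "\n")
```

Scoping and the formatting/collapsing/linking criteria would still need a layer on top, which is where something like Markdown could come in.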

Example 2: Verdict modification

When reporting intermediate results, before the test run is finished, there is the risk of reporting false positives. Robot can decide to report a test case passed, and then later ‘change its mind’ to fail it after all. The specific situation I have in mind is when a suite teardown fails. This causes all test cases that are part of that suite to fail. For parent suites, the result is recursive, so it can happen that all test cases switch to a fail result at the very last moment. With the whole test suite red, it is difficult to see which test cases failed for a specific reason and which failed due to the failure in the cleanup.

In general, I think it is weird that a test result can be altered after it has been reported. A test result should remain inconclusive until sufficient information is available to pass or fail it, or to decide that the end result is inconclusive because you know you cannot reach a verdict. In the example of the suite teardown, I do not think that the reporting is premature: the test case did end successfully. My suggestion would be to fail the test run and the test suite, but to keep all prior pass results at pass.

Once it is possible to fail test runs and test suites separately, there should be no need to revisit test results and change their outcome. Now, reporting and logging can be made to match the streaming concept, without the risk of having different results at different times or in different places.
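
The proposed model can be sketched in a few lines (a toy illustration of the idea, not Robot Framework's actual result model): test verdicts are frozen once reported, and a teardown failure fails the suite itself rather than rewriting the finished tests.

```python
from dataclasses import dataclass, field

@dataclass
class SuiteResult:
    """Toy model: test verdicts are final once reported; a teardown
    failure fails the suite, not the already-finished tests."""
    name: str
    tests: dict = field(default_factory=dict)   # test name -> final verdict
    status: str = "PASS"

    def report_test(self, test, verdict):
        if test in self.tests:
            raise ValueError(f"{test} already has a final verdict")
        self.tests[test] = verdict   # safe to stream out immediately

    def teardown_failed(self):
        # Fail the suite only; previously reported PASS verdicts stay PASS.
        self.status = "FAIL"
```

With this separation, a red suite tells you the cleanup failed, while the per-test verdicts still show which tests failed for their own reasons.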

Summary

  • Report test results as soon as the verdict is decided
  • Do not modify test results once reported
  • Suites and test runs can fail separately, independent of test cases
  • Use a file format that stays valid throughout the test run
@JFoederer (Contributor, Author)

Related issue in RIDE: robotframework/RIDE#2748

@sskfny commented Apr 21, 2025

Does this apply to the Remote framework as well?
