
Conversation

@adamgfraser (Contributor)

Resolves #1899.

Implements support for test annotations by creating a new TestAnnotations service that is automatically provided by the TestEnvironment and allows reading from and writing to a test annotation map backed by a Ref. This functionality is used to implement a new TestAspect#timed that times tests. For example, here is the output from using timed on BoolAlgebraSpec, which would be displayed after the normal test results:

[info] Timed 21 tests in 16 s 289 ms:
[info]   + hashCode is consistent with equals: 4 s 145 ms (25.45%)
[info]   + and distributes over or: 2 s 421 ms (14.87%)
[info]   + and is associative: 2 s 390 ms (14.68%)
[info]   + De Morgan's laws: 2 s 270 ms (13.94%)
[info]   + and is commutative: 2 s 163 ms (13.28%)
[info]   + or is associative: 951 ms (5.84%)
[info]   + or distributes over and: 866 ms (5.32%)
[info]   + or is commutative: 603 ms (3.70%)
[info]   + double negative: 441 ms (2.71%)
[info]   + map transforms values: 4 ms (0.03%)
[info]   + isSuccess returns whether result is success: 4 ms (0.03%)
[info]   + implies returns implication of two values: 3 ms (0.02%)
[info]   + either returns disjunction of two values: 3 ms (0.02%)
[info]   + foreach combines multiple values: 3 ms (0.02%)
[info]   + isFailure returns whether result is failure: 2 ms (0.02%)
[info]   + collectAll combines multiple values: 2 ms (0.02%)
[info]   + failures collects failures: 2 ms (0.02%)
[info]   + both returns conjunction of two values: 2 ms (0.01%)
[info]   + all returns conjunction of values: 2 ms (0.01%)
[info]   + any returns disjunction of values: 2 ms (0.01%)
[info]   + as maps values to constant value: 2 ms (0.01%)

The tests are sorted by duration, and the percentage of time spent on each test is shown, so it is easy to identify which tests are driving overall test execution time.
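For reference, here is a minimal sketch of how the aspect would be applied. The spec and assertions are hypothetical, only `timed` comes from this PR, and the `DefaultRunnableSpec` constructor syntax assumes the RC-era zio-test API:

```scala
import zio.test._
import zio.test.Assertion._
import zio.test.TestAspect._

// Hypothetical spec; `timed` is the new aspect introduced by this PR.
object ExampleSpec
    extends DefaultRunnableSpec(
      suite("ExampleSpec")(
        test("addition works") {
          assert(1 + 1, equalTo(2))
        },
        test("string concatenation works") {
          assert("a" + "b", equalTo("ab"))
        }
      ) @@ timed // annotate each test with its duration and report the timings
    )
```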

@adamgfraser changed the title from "ZO Test: Support Test Annotations" to "ZIO Test: Support Test Annotations" on Oct 11, 2019
@adamgfraser requested a review from jdegoes on October 11, 2019 19:08
final def mapTestM[R1 <: R, E1 >: E, L1 >: L, T1](f: T => ZIO[R1, E1, T1]): Spec[R1, E1, L1, T1] =
  caseValue match {
    case SuiteCase(label, specs, exec) =>
      Spec.suite(label, specs.map(_.map(_.mapTestM(f))), exec)
Member

.map(_.map(_.mapTest(...)))

😆

trait Service[R] {
  def annotate[V](key: TestAnnotation[V], value: V): ZIO[R, Nothing, Unit]
  def get[V](key: TestAnnotation[V]): ZIO[R, Nothing, V]
  val testAnnotationMap: ZIO[R, Nothing, TestAnnotationMap]
Member

Do we have to expose this map? It would be nice to keep the surface area as tiny as possible, which means just annotate and get.
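For illustration, a rough sketch of what that two-method surface could look like from a test's point of view. This is a sketch, not code from the PR: the import paths, the `testAnnotations` accessor following the usual module pattern, and the `retries` annotation are all assumptions.

```scala
import zio._
import zio.test.TestAnnotation              // assumed location
import zio.test.environment.TestAnnotations // assumed location

// Hypothetical annotation; the TestAnnotation constructor is not shown in this excerpt.
def retries: TestAnnotation[Int] = ???

// Assumes the module pattern: a TestAnnotations trait exposing
// `testAnnotations: TestAnnotations.Service[Any]`.
val bumpRetries: ZIO[TestAnnotations, Nothing, Int] =
  for {
    service <- ZIO.access[TestAnnotations](_.testAnnotations)
    _       <- service.annotate(retries, 1) // write a value under the annotation's key
    n       <- service.get(retries)         // read the current value back
  } yield n
```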

Contributor Author

If we want executing a spec to give you the annotation map, we need a way to retrieve it. We could make it package private, though that would mean any user implementing their own test executor would not be able to access the annotation map.

 * An `ExecutedSpec` is a spec that has been run to produce test results.
 */
-type ExecutedSpec[+L, +E, +S] = Spec[Any, Nothing, L, Either[TestFailure[E], TestSuccess[S]]]
+type ExecutedSpec[+L, +E, +S] = Spec[Any, Nothing, L, (Either[TestFailure[E], TestSuccess[S]], TestAnnotationMap)]
Member

I like the idea that executing the spec gives you back the annotation map (at each node) always. 👍

It may be empty in some configurations, that's fine.

By the way, this structure is beginning to get complex enough that we should consider a custom type:

sealed trait TestResult[+E, +S] {
  def annotations: TestAnnotationMap
}
object TestResult {
  final case class Failed[+E](failure: TestFailure[E], annotations: TestAnnotationMap) extends TestResult[E, Nothing]
  final case class Succeeded[+S](success: TestSuccess[S], annotations: TestAnnotationMap) extends TestResult[Nothing, S]
}

Then we can simplify to:

type ExecutedSpec[+L, +E, +S] = 
  Spec[Any, Nothing, L, TestResult[E, S]]

Member

(I think we're already using TestResult name for something...)

Contributor Author

Yes. After getting this merged my next priority is to simplify some of these types.

/**
 * Constructs a new `TestAnnotations` service.
 */
def makeService: UIO[TestAnnotations.Service[Any]] =
Member

Is this actually helpful, because there is just one annotation map here?

Contributor Author

It is definitely questionable. The vast majority of users are just going to use the TestEnvironment and maybe provideSome to enrich it with additional functionality. But if a power user wants to provide their own environment type, this would allow them to include test annotations functionality in that type.
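As a sketch of that power-user case (hedged: it assumes the module pattern of the time, with a TestAnnotations trait exposing a `testAnnotations` field; `MyService` and `CustomEnvironment` are hypothetical names):

```scala
import zio._
import zio.test.environment.TestAnnotations // assumed location

trait MyService // placeholder for whatever extra functionality the user provides

// A custom environment type that includes test annotations alongside other services.
trait CustomEnvironment extends TestAnnotations with MyService

val customEnvironment: UIO[CustomEnvironment] =
  TestAnnotations.makeService.map { service =>
    new CustomEnvironment {
      val testAnnotations = service
    }
  }
```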

/**
 * Constructs a new `TestAnnotations` instance.
 */
def make: UIO[TestAnnotations] =
Member

Ditto for this — is it helpful?

Contributor Author

See above.

res <- render(executedSpec.mapLabel(_.toString))
_ <- ZIO.foreach(res.flatMap(_.rendered))(TestLogger.logLine)
_ <- logStats(duration, executedSpec)
_ <- renderTimed(executedSpec).flatMap(_.fold[URIO[TestLogger, Unit]](URIO.unit)(TestLogger.logLine))
Member

I think we are missing some type of interface, something like this:

type TestAnnotationRenderer[A] = 
  TestAnnotation[A] => RenderedResult

Then we can push knowledge of which aspects need rendering to a higher level, and get it out of here, so the user can pass their own rendering for custom test aspects (of course, we will supply the ones for the built-in test annotations like timing, etc.).

@adamgfraser (Contributor Author), Oct 13, 2019

Agreed. I was thinking about whether test annotations should contain their own rendering logic but was worried that would couple data with presentation too much. This is a better way to do it. Let me work on adding that, though I think we may need to modify it slightly to be something like:

type TestAnnotationRenderer[A] =
  (TestAnnotation[A], ExecutedSpec[Any, Any, Any]) => UIO[String]

Conceptually I think we want the renderer to be able to introspect the test annotations from all the tests in a spec (e.g. to sort the rendered test times from longest to shortest or to report at the level of suites instead of individual tests).
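As a small illustration of the kind of whole-spec introspection meant here (plain Scala, not code from the PR): given per-test timings already collected from the executed spec's annotation maps, sort them by duration and render each with its share of the total, which is essentially what the output at the top of this PR shows.

```scala
// The (label, millis) pairs stand in for timings gathered from the executed spec.
def renderTimings(timings: List[(String, Long)]): List[String] = {
  val total = timings.map(_._2).sum.max(1L) // avoid division by zero
  timings
    .sortBy { case (_, millis) => -millis } // longest first
    .map { case (label, millis) =>
      f"$label: $millis ms (${millis * 100.0 / total}%.2f%%)"
    }
}
```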

case Spec.TestCase(label, test) =>
  test.map {
    case (_, annotationMap) =>
      val d = annotationMap.get(TestAnnotation.Timing)
Member

We should think how to pull this logic out of here and push it higher.

Contributor Author

Yes. I think once we have TestAnnotationRenderer, TestReporter can take a List[TestAnnotationRenderer] as input, and the DefaultTestReporter in particular can have a set of default renderers in addition to whichever ones are passed to it. Then the DefaultTestReporter can traverse the executed spec, report all annotations it has renderers for, and either ignore or report an error for any others.
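A sketch of that direction (hedged: the renderer interface and reporter shape here are assumptions rather than the PR's actual types; only `ExecutedSpec` and `TestLogger.logLine` are taken from the snippets above):

```scala
import zio._
import zio.test._

// A renderer turns an executed spec's annotations into printable lines.
trait AnnotationRenderer {
  def render(spec: ExecutedSpec[Any, Any, Any]): UIO[List[String]]
}

// A reporter built from a list of renderers: run each renderer and log its output.
def reportWith(renderers: List[AnnotationRenderer])(
  executedSpec: ExecutedSpec[Any, Any, Any]
): URIO[TestLogger, Unit] =
  ZIO
    .foreach(renderers)(_.render(executedSpec))
    .flatMap(lines => ZIO.foreach(lines.flatten)(TestLogger.logLine))
    .unit
```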

live            <- Live.makeService(new DefaultRuntime {}.Environment)
random          <- TestRandom.makeTest(TestRandom.DefaultData)
size            <- Sized.makeService(100)
testAnnotations <- TestAnnotations.makeService
Member

Don't we need (in the most general case) one TestAnnotationMap per test? Of course, we can compose all the TestAnnotations inside a suite into one for the suite itself, and so forth; but it seems to me that to have the most flexibility, we will need one per test.

(This, btw, explains my confusion about the interface for TestAnnotations, specifically the utility of the make* methods.)

Contributor Author

Yes, I think we want one TestAnnotationMap per test. Right now that is what happens, because the TestExecutor provides a separate copy of the TestEnvironment to each test. Does that make sense or would you approach it differently?

Member

Ah, I didn't see that. Where is that code?

Contributor Author

The TestExecutor uses Spec#provideManaged here and provideManaged (as opposed to provideManagedShared) provides a separate copy of the resources to each test here.

Member

So the only problem I see is that if the user chooses to share an environment across tests (e.g. to share database connection costs), then suddenly the annotations will not be per test, but rather, "global" per the sharing.

It would be nice if we could guarantee the same annotations regardless of environmental sharing; and also, if the annotations for suites were computed through semigroup composition of the annotations of their contents (it's a stronger guarantee).

What do you think?

Contributor Author

I tried that initially and ran into some issues. For a lot of annotations we don't actually want to combine them using the semigroup for the test annotation. For example, for the Timing annotation it isn't that interesting to just add up the durations of all the tests; we want to combine them using more of a "map merge" semigroup with mappings from labels to durations. The more serious issue is that we want to write information to test annotations through test aspects, but test aspects don't have access to test labels. So if we have each test write its duration to a shared annotation map, we don't have any keys to associate with the durations in a sensible way to then aggregate.

Maybe another way we could address this is by taking the TestAnnotations service out of the test environment and having the TestExecutor provide it separately to all the tests, so it would be impossible, or at least very hard, for a user to provide a shared annotation map.

I do think the semigroup solution is the most natural one, so I would love to do that if possible, but I'm not sure how to overcome the issues there.
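To make the "map merge" idea concrete, here is a small sketch (not code from the PR) of the combine being described: timings keyed by test label, with durations added only when a label appears on both sides.

```scala
import zio.duration.Duration

// Combine two timing maps: keep all labels, summing durations on collisions.
// This is what makes a suite's timings an aggregation of its children rather
// than a single summed duration.
def mergeTimings(
  left: Map[String, Duration],
  right: Map[String, Duration]
): Map[String, Duration] =
  right.foldLeft(left) { case (acc, (label, duration)) =>
    acc.updated(label, acc.getOrElse(label, Duration.Zero) + duration)
  }
```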

Member

I'll write up some ideas here soon!

Member

If we use a FiberRef, then during the evaluation of each test the executor can replace the contents of the FiberRef with a fresh map, capture the map after the test, and in the end generate one map per test and one map per suite. The suite maps would be generated using the annotation append functionality, which merges collisions, so suite maps would always be an aggregation of their children.

Then the renderer can look at the map and decide what to render, per node, based on the test annotations it wants to render (timing, failure count, success count, etc.).
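A minimal sketch of that FiberRef approach (hedged: `TestAnnotationMap.empty` is an assumption; the rest is plain FiberRef usage, not the PR's actual executor code):

```scala
import zio._
import zio.test.TestAnnotationMap // assumed location

// Run a single test with a fresh annotation map in the FiberRef, then capture
// whatever the test wrote, so annotations stay per test even if the environment
// is shared across tests.
def runAnnotated[R, E, A](
  annotations: FiberRef[TestAnnotationMap],
  test: ZIO[R, E, A]
): ZIO[R, E, (A, TestAnnotationMap)] =
  for {
    _        <- annotations.set(TestAnnotationMap.empty) // fresh map for this test
    result   <- test
    captured <- annotations.get                          // capture what the test recorded
  } yield (result, captured)
```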

@adamgfraser (Contributor Author)

@jdegoes This is ready for another review.

@regiskuckaertz (Member)

Looks like this needs a rebase and then 🚀

@jdegoes (Member) commented Dec 17, 2019

We spent some time on this and it looked really good.

@adamgfraser requested a review from jdegoes on December 18, 2019 23:16
@adamgfraser (Contributor Author)

Updated now. Sorry for the delay on this.

@jdegoes (Member) commented Dec 19, 2019

Looks fantastic! Would be nice to see how the "new" approach renders the timings, but anyway, looks good to merge!

@jdegoes merged commit c5565c8 into zio:master on Dec 19, 2019
@adamgfraser deleted the testannotations branch on December 19, 2019 16:50
@adamgfraser (Contributor Author)

@jdegoes Here is an example of what it looks like right now. The annotations are shown along with each test, and suite annotations are composed using the monoid.

I really like your idea of having some kind of more structured tabular output in HTML or something like that. With the current console output, the rendering of the annotations is interleaved with the test results, so if there are too many annotations it can be difficult to see the test results, and there also isn't any way to, for example, sort by execution times.

It seems like an ideal end point might be where certain annotations useful for interactive testing were rendered in the console (e.g. seeds of failing tests), and other annotations with more "metrics" were written to a more structured format for visualization or analysis.

[screenshot: example of the rendered test output with annotations]
