
Conversation

@alterationx10
Contributor

This is an exploration from #7883

For reasons I don't understand, in ZTestRunnerJS.scala the Task trait for ZTestTask expects the method def execute(eventHandler: EventHandler, loggers: Array[Logger], continuation: Array[Task] => Unit): Unit, and doesn't call the main def execute(eventHandler: EventHandler, loggers: Array[Logger]): Array[Task] from BaseTestTask.

In ZTestRunnerNative.scala, that isn't the case - but there is no override of the execute method, and the default implementation crashes.

This PR copies over the JS implementation of execute and implements override def execute(eventHandler: EventHandler, loggers: Array[Logger]): Array[Task], which calls it. There was already a non-overriding execute method; however, it also seemed to crash, which is why I copied over the JS version.

The next peculiarity is considering running the tests:

coreTestsNative/test will hang at the zio.ZIOSpec suite, but works via coreTestsNative/testOnly zio.ZIOSpec.

If I remove the first test there:

    suite("heap")(
      test("unit.forever is safe") {
        for {
          fiber <- ZIO.unit.forever.fork
          _     <- Live.live(ZIO.sleep(5.seconds))
          _     <- fiber.interrupt
        } yield assertCompletes
      }
    ),

Then coreTestsNative/test will not hang (and passes 🎉 ).

I have not yet attempted to run native tests outside of core-tests.

@adamgfraser
Contributor

Great progress. That test failure would indicate that possibly yielding is not working correctly. In a single threaded environment that test relies on the child fiber yielding control at some point so that other fibers have a chance to run. Looking for other areas where there is platform specific code and making sure the Native implementation is consistent with the Scala.js one could be a good place to start. Will take a look in more detail in the morning.
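To illustrate the point about yielding (a toy sketch in plain Scala, no ZIO; all names here are made up): on a single thread, queued work like a timer callback only ever runs if each task periodically hands control back to the run loop.

```scala
import scala.collection.mutable

// Toy single-threaded run loop: a "cooperative" task re-enqueues the rest of
// its work instead of looping inline, so the queued timer callback gets a turn.
object YieldDemo {
  val queue = mutable.Queue.empty[() => Unit]

  def cooperativeForever(n: Int): Unit =
    if (n > 0) queue.enqueue(() => cooperativeForever(n - 1)) // yield point

  def main(args: Array[String]): Unit = {
    queue.enqueue(() => cooperativeForever(1000))
    queue.enqueue(() => println("timer fired")) // starves if the task above never yields
    while (queue.nonEmpty) queue.dequeue()()
  }
}
```

If cooperativeForever instead looped inline with while (true), control would never return to the run loop and "timer fired" would never print, which is the shape of the hang being described.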

@alterationx10
Contributor Author

I think I see now where the difference in the execute methods comes from.

  .jsSettings(
    jsSettings,
    libraryDependencies ++= Seq(
      ("org.scala-js" %% "scalajs-test-interface" % scalaJSVersion).cross(CrossVersion.for3Use2_13)
    )
  )
  .nativeSettings(
    nativeSettings,
    libraryDependencies ++= Seq("org.scala-native" %%% "test-interface" % nativeVersion)
  )

scalajs-test-interface has the extra execute method:
https://www.scala-js.org/api/scalajs-test-interface/1.12.0/sbt/testing/Task.html

@adamgfraser
Contributor

Yes exactly.

I spent some time looking at that test failure.

    suite("heap")(
      test("unit.forever is safe") {
        for {
          fiber <- ZIO.unit.forever.fork
          _     <- Live.live(ZIO.sleep(5.seconds))
          _     <- fiber.interrupt
        } yield assertCompletes
      }
    ),

It looks like in the above test we fork the fiber and initiate the sleep, but the sleep never returns. I verified by adding debug statements that the live clock is being called and that it is calling Timer.timeout in ClockPlatformSpecific. Some other interesting observations are that this test completes if I just run this test or if I just run ZIOSpec, but it fails if I run the entire test suite. Also, excluding this test, every test for ZIO Core, ZIO Stream, and ZIO Test passes.

Thinking about what is different about this test: the fiber yields, but it never stops executing, so the test assumes that the underlying platform allows the timer to run even when a fiber is running forever rather than suspending. I'm not sure how this relates to the fact that the test only fails when other tests are run. Possibly those other tests also use the timer in a way that corrupts its internal state?

@alterationx10
Contributor Author

Yes, sorry I wasn't so clear - the tests were passing for me as well, except when it hangs running the full suite instead of testOnly.

Another interesting behavior I just encountered:

If I run coreTestsNative/testOnly zio.ScopeSpec zio.ZIOSpec it will hang.
If I run coreTestsNative/testOnly zio.ZIOSpec zio.ScopeSpec it completes.

So it seems ok if it's the first test, but causes trouble when not.

I'm not saying it's a proper solution, but AAAZIOSpec has a nice sound to it 😉 (j/k)

I had experimented with configuring sbt with parallelExecution and concurrentRestrictions yesterday as well, to no avail.

@adamgfraser
Contributor

The tests should already be run sequentially since we have parallel execution set to false, and that appeared to be the case for me. So I think it is running something before this test, versus running it concurrently with it, that is causing the issue.
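For reference, the sbt settings being referred to look something like this (a sketch; the actual build may scope these differently):

```scala
// build.sbt (sketch): run test suites within a project sequentially,
// and limit sbt to one test task at a time across the whole build.
Test / parallelExecution := false
Global / concurrentRestrictions += Tags.limit(Tags.Test, 1)
```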

@alterationx10
Contributor Author

Just another peculiarity:

Running coreTestsNative/testOnly zio.ScopeSpec zio.ZIOSpec

with an updated first test to

    suite("heap")(
      test("unit.forever is safe") {
        for {
          _     <- Console.printLine("Sleepy 1")
          _     <- Live.live(ZIO.sleep(5.seconds))
          _     <- Console.printLine("Sleepy Done")
//          fiber <- ZIO.unit.forever.fork
//          _     <- Live.live(ZIO.sleep(5.seconds))
//          _     <- fiber.interrupt
        } yield assertCompletes
      }
    ),

and the test bails early as successful, but doesn't actually complete:

...
    + preserves order of nested finalizers - 3 ms
  + withEarlyRelease - 2 ms
+ ZIOSpec
  + heap
Sleepy 1
[info] Done
[success] Total time: 44 s, completed Mar 9, 2023, 11:59:29 AM

@alterationx10
Contributor Author

I thought about how the runner was evaluating the spec, and then dusted off some code I used the last time I was testing zio effects for scala native... I think it works now.

Instead of using

val fiber = Runtime.default.unsafe.fork {
....
}
...

I switched to this

    Unsafe.unsafe { implicit unsafe =>
      runtime.unsafe
        .run(logic)
        .getOrThrowFiberFailure()
    }

The one thing it doesn't have is something corresponding to this part for logging:

    fiber.unsafe.addObserver { exit =>
      exit match {
        case Exit.Failure(cause) => Console.err.println(s"$runnerType failed.")
        case _                   =>
      }
      continuation(Array())
    }(Unsafe.unsafe)

... but I imagine it might need some more slight clean up anyway 😄

@adamgfraser
Contributor

So I think the underlying question here is whether we can block for a result. Runtime.unsafe.run is going to block for the result to be available. So if we can do that it is fine but I thought the reason we were doing this was to try to avoid that.

@alterationx10
Contributor Author

My current attempts at using Runtime.default.unsafe.fork don't work as before. The JVM version seems to Await.result the future, so does this blocking approach seem close to that? The method I replaced was written similarly to the Scala.js version, but it also wasn't being called, as far as I can tell.

I thought I could de-duplicate a file, but was wrong - I reverted it, so hopefully the Scala 3 code compiles again. I think the native tests are dying due to heap size... I just found where I can bump that, so I'll adjust.

@adamgfraser
Contributor

On the JVM we can block for a result because we actually have multiple threads so at the end of the world if we have to we can just block the thread for the result. We can't do that on Scala.js.

@alterationx10
Contributor Author

Peeking at the ZIO 1 branch:

For the native runner I see:

def execute(eventHandler: EventHandler, loggers: Array[Logger], continuation: Array[Task] => Unit): Unit =
    Runtime((), specInstance.platform)
      .unsafeRunAsync((sbtTestLayer(loggers).build >>> run(eventHandler).toManaged_).use_(ZIO.unit)) { exit =>
        exit match {
          case Exit.Failure(cause) => Console.err.println(s"$runnerType failed: " + cause.prettyPrint)
          case _                   =>
        }
        continuation(Array())
      }

but I don't think it's being called, since it has the continuation in the signature. In that case, the default/JVM version in the shared code is:

override def execute(eventHandler: EventHandler, loggers: Array[Logger]): Array[Task] =
    try {
      Runtime((), specInstance.platform).unsafeRun {
        run(eventHandler)
          .provideLayer(sbtTestLayer(loggers))
          .onError(e => UIO(println(e.prettyPrint)))
      }
      Array()
    } catch {
      case t: Throwable =>
        t.printStackTrace()
        throw t
    }

I suppose it was actually calling the default method as well? 🤔

@adamgfraser
Contributor

Yes it ran the generic method. Which raises the question of why your original change made any difference since it clearly blocks for a result to be available.

@alterationx10
Contributor Author

My original change was to override and not use the default, but instead call the local method.

The default is a runtime.unsafe.runToFuture and it then calls an Await.result. It seemed to be the Await.result that was causing the issue.
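The shape being described - kick the work off as a Future, then block on it - looks roughly like this (a simplified sketch, not the actual ZIO source; runBlocking is a made-up name):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration

// Sketch: start the work asynchronously, then park the calling thread on the
// result. On a single-threaded platform this can deadlock if completing the
// Future requires the very thread that is blocked in Await.result.
def runBlocking[A](start: () => Future[A]): A =
  Await.result(start(), Duration.Inf)
```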

@adamgfraser
Contributor

Right but that doesn't really make sense because run is awaiting the result too.

@alterationx10
Contributor Author

Nobody said I was making any sense yet 🤣 But, going back to the original behavior of the fail-fast error: from #7833

Caused by: java.util.concurrent.TimeoutException: Future timed out after [Duration.Inf]

It seems to die right away while trying to call Await.result, so I've been taking that as "It's not failing because I'm blocking, it's failing because I'm specifically calling Await.result"

Just searching around the scala-native repo, I see this in the junit-async section, which is kind of interesting. (I haven't found much on isMultithreadingEnabled - it doesn't seem available in a released version?)

package scala.scalanative.junit

import scala.concurrent.{Await, Future}
import scala.concurrent.duration.Duration
import scala.scalanative.meta.LinktimeInfo.isMultithreadingEnabled

package object async {
  type AsyncResult = Unit
  def await(future: Future[_]): AsyncResult = {
    if (isMultithreadingEnabled)
      Await.result(future, Duration.Inf)
    else {
      scala.scalanative.runtime.loop()
      future.value.get.get
    }
  }
}

However, in our runner code: if I use a runtime.unsafe.runToFuture implementation and I replace Await.result with

scala.scalanative.runtime.loop()
future.value.get.get

then it performs like my first change, where the tests hang at ZIOSpec when running multiple tests.

@alterationx10
Contributor Author

With what's currently in place (less one fmt issue 👼 ), locally I can run testNative and all tests pass except these:

[info]   - CancelableFutureSpec - auto-kill regression
[info] Timeout of 2 m exceeded.
[info] 
[info]     - FiberRefSpec - Create a new FiberRef with a specified value and check if: - the value of all fibers in inherited when running many ZIOs with collectAllPar
[info] Timeout of 2 m exceeded.
[info]   - FiberRefSpec - zipPar
[info] Timeout of 2 m exceeded.
[info] Done
[error] Failed tests:
[error]         zio.CancelableFutureSpec
[error]         zio.FiberRefSpec
[error] (coreTestsNative / Test / test) sbt.TestsFailedException: Tests unsuccessful

(and zipPar succeeded when I ran only FiberRefSpec - but the other two didn't seem to work for me locally)

It seems the current outstanding issues are

  1. runtime.unsafe.fork vs runtime.unsafe.run
  2. CI testPlatforms seems to keep failing due to heap space
  3. CancelableFutureSpec
  4. FiberRefSpec

For 1, it seems like the updated ZIO 1 code runs these tests synchronously - so is that acceptable here too? Or do we want to dig into that issue further?

@adamgfraser
Contributor

Weren't all tests except unit.forever is safe passing with your first commit? That was at least my observation running your branch locally.

@alterationx10
Contributor Author

I was focusing more on getting past the ZIOSpec issue, so I don't quite remember. I just checked out that commit and ran FiberRefSpec; it failed with the same time-out at the value of all fibers in inherited when running many ZIOs with collectAllPar, so it seems like that was not the case.

@adamgfraser
Contributor

It looks like the failure in FiberRefSpec is due to this test:

      test("the value of all fibers in inherited when running many ZIOs with collectAllPar") {
        for {
          fiberRef <- FiberRef.make[Int](0, _ => 0, _ + _)
          _        <- ZIO.collectAllPar(List.fill(100000)(fiberRef.update(_ + 1)))
          value    <- fiberRef.get
        } yield assert(value)(equalTo(100000))
      },

If I reduce the size from 100,000 to 10,000 the test passes. Could be a problem with resources, or there could be something in the platform specific implementation that is algorithmically pathological.

@alterationx10
Contributor Author

It looks like the issue with that test is the default unbounded parallelism. I haven't dug into the max value at which I can make it break, but the test below (only adding .withParallelism(100)) will pass.

      test("the value of all fibers in inherited when running many ZIOs with collectAllPar") {
        for {
          fiberRef <- FiberRef.make[Int](0, _ => 0, _ + _)
          _        <- ZIO.collectAllPar(List.fill(100000)(fiberRef.update(_ + 1))).withParallelism(100)
          value    <- fiberRef.get
        } yield assert(value)(equalTo(100000))
      },

@alterationx10
Contributor Author

alterationx10 commented Mar 11, 2023

For CancelableFutureSpec, the suite will pass if I add @@ TestAspect.sequential to it. Perhaps this ties into the run/fork issue for the runner.

Edit:
So strange - the test suite is now not working with @@ TestAspect.sequential after a clean/re-test 😞

@alterationx10
Contributor Author

More investigating on CancelableFutureSpec:

(This is all done in the same sbt session)

If I do a clean, and then a coreTestsNative/testOnly *CancelableFutureSpec it will time out at the auto-kill regression test (but the others seem to pass).

If I run coreTestsNative/testOnly *CancelableFutureSpec again, that test will still time out.

If I run coreTestsNative/testOnly *CancelableFutureSpec yet again, the test does not hang.


Sometimes it hangs, but if I kill it with ctrl-c before it times out, it starts working on subsequent runs.

Absolutely maddening!

@adamgfraser
Contributor

I think we need to be addressing the underlying causes of these issues rather than changing the tests.

@alterationx10
Contributor Author

I agree. For the CancelableFutureSpec, after chasing some red herrings, I decided to lower the count for nonFlaky and it seems to consistently work. It got past it in CI as well, but another test failed there that doesn't on my machine (ZLayerSpec - "preserves failures"). It's also a nonFlaky one. On my machine, I bumped up the nonFlaky count higher and got it to time out. I was mostly poking at the tests to help me narrow down where to look next 👼

@adamgfraser
Contributor

👍

@alterationx10
Contributor Author

Looking back at the issue of running with .fork vs .run:

I can run this app, and there is no issue:

object Tester extends scala.App {
  implicit val trace = Trace.empty
  implicit val unsafe = Unsafe.unsafe

  Runtime.default.unsafe.fork {
    for {
      _ <- zio.Console.printLine("Spec 1")
    } yield ()
  }

  Runtime.default.unsafe.fork {
    for {
      _ <- zio.Console.printLine("Spec 2 before")
      _ <- ZIO.sleep(5.seconds)
      _ <- zio.Console.printLine("Spec 2 after")
    } yield ()
  }

}

However, if I use these two reproducer specs (I have them locally) with coreTestsNative/testOnly zio.AltSpec1 zio.AltSpec2, the same issue arises where the test just stops - with no clear sign of error:

object AltSpec1 extends ZIOBaseSpec {
  override def spec: Spec[TestEnvironment with Scope, Any] =
    suite("Spec 1")(
      suite("succeed")(
        test("be a succeeded test that's run first") {
          for {
            _ <- zio.Console.printLine("Spec 1")
          } yield assertCompletes
        }
      )
    )
}
object AltSpec2 extends ZIOBaseSpec {
  override def spec: Spec[TestEnvironment with Scope, Any] =
    suite("Spec 2")(
      suite("sleepy stuff")(
        test("do a sleepy thing") {
          (for {
            _ <- zio.Console.printLine("Spec 2 before")
            _ <- ZIO.sleep(5.seconds)
            _ <- zio.Console.printLine("Spec 2 after")
          } yield assertCompletes)
        } @@ TestAspect.withLiveClock
      )
    )
}

This makes me feel like it's something running the test code, vs "a core ZIO issue".

Looking at TestExecutor.scala, line 119 is where I've narrowed in the most; it looks like this bit is responsible for running the test effect:

...
                    result  <- ZIO.withClock(ClockLive)(test.timed.either)
...

If I put a log before and after, I can see the before for AltSpec2, but not after. Everything up to that line will work. I've even separated out the .timed portion of this code, and the same thing happens.

If I try to add a sleep before running the test, I can see the sleep dies there before the test is run. E.g. I won't see "after sleep" for AltSpec2:

                    _       <- ZIO.unit.debug("before sleep")
                    _       <- ZIO.withClock(ClockLive)(ZIO.sleep(1.second))
                    _       <- ZIO.unit.debug("after sleep")
                    result  <- ZIO.withClock(ClockLive)(test.timed.either)

To be continued...

@adamgfraser
Contributor

I think that is the same issue I identified above regarding ZIO.sleep in the original failing test.

@alterationx10
Contributor Author

Yes, and this was me circling back to it 😁 Once we crack that nut, we can tackle the withParallelism issue, and then determine if those nonFlaky tests are just resource intensive or not.

@adamgfraser
Contributor

😀

Can we reproduce the issue without ZIO?

@alterationx10
Contributor Author

I've only been able to reproduce it on a narrow subset of ZIO 🙃

I only see it fail when being run in a spec, as part of multiple specs, AND that spec isn't the first run! 😭

I built against a local version of native-loop-core, and was logging all the timer calls. I see clears up to the point where we're seeing the issue (i.e. I can see the timers being set for timeouts for all the specs, etc., and they clear when done).

Probably worth investigating some more - I'll see if I can try to recreate the issue outside of ZIO as well.
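A standalone repro might start from the Timer API quoted earlier in the thread (a sketch only; I'm assuming the scala-native-loop Timer.timeout signature shown above, and this is untested):

```scala
import scala.concurrent.duration._
import scala.scalanative.loop.Timer

// Hypothetical repro sketch outside of ZIO: schedule two timeouts back to
// back and check whether the second callback still fires after earlier work
// has run on the event loop.
object TimerRepro {
  def main(args: Array[String]): Unit = {
    Timer.timeout(1.millis)(() => println("timer 1 fired"))
    Timer.timeout(50.millis)(() => println("timer 2 fired"))
  }
}
```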

Also, another interesting thing, just running the two tests: coreTestsNative/testOnly zio.AltSpec1 zio.AltSpec2: If I look at target/test-reports for AltSpec2

<?xml version='1.0' encoding='UTF-8'?>
<testsuite hostname="max.local" name="zio.AltSpec2" tests="1" errors="0" failures="0" skipped="0" time="0.004" timestamp="2023-03-13T23:49:18">
          <properties>
[truncated this because too long]
      </properties>
          <testcase classname="zio.AltSpec2" name="Spec 1 - succeed - be a succeeded test that's run first" time="0.004">
                      
                    </testcase>
          <system-out><![CDATA[]]></system-out>
          <system-err><![CDATA[]]></system-err>
        </testsuite>

It looks like the test case for Spec 1 leaked in? Seeing a <testcase> in there at all seems a bit off, so maybe it's a byproduct of another issue. A quick glance seems to show that if I run multiple specs, the former leaks into the latter in the same way.

Oh, hey - this might be interesting. If I add in a third test I haven't been running yet, coreTestsNative/testOnly zio.AltSpec3, I actually get some error messaging.

The test:

object AltSpec3 extends ZIOBaseSpec {
  override def spec: Spec[TestEnvironment with Scope, Any] =
    suite("Spec 3")(
      suite("do an unbounded parallelism")(
        test("a thing"){
          for {
            fiberRef <- FiberRef.make[Int](0, _ => 0, _ + _)
            _ <- ZIO.collectAllPar(List.fill(100000)(fiberRef.update(_ + 1))).withParallelism(100) // TODO get back to default
            value <- fiberRef.get
          } yield assert(value)(equalTo(100000))
        }
      )
    )
}

The error

[info] Starting process '/Users/alt/Projects/alterationx10/zio/core-tests/native/target/scala-2.13/core-tests-test-out' on port '62838'.
+ Spec 3
  + do an unbounded parallelism loadedTestFrameworks 0s
    + a thing - 370 ms
[warn] Force close java.lang.IllegalStateException: Unknown opcode: 4
[warn] Force close java.lang.RuntimeException: Process /Users/alt/Projects/alterationx10/zio/core-tests/native/target/scala-2.13/core-tests-test-out finished with non-zero value 137
[error] Test runner interrupted by fatal signal 9
[error] stack trace is suppressed; run last coreTestsNative / Test / testOnly for the full output
[error] (coreTestsNative / Test / testOnly) scala.scalanative.testinterface.common.RPCCore$ClosedException: scala.scalanative.testinterface.NativeRunnerRPC$RunTerminatedException
[error] Total time: 27 s, completed Mar 14, 2023, 12:01:14 AM
sbt:zio> last  coreTestsNative / Test / testOnly
[debug] Running TaskDef(zio.AltSpec3, scala.scalanative.testinterface.common.Serializer$FingerprintSerializer$$anon$5@522c86cf, false, [SuiteSelector])
[error] scala.scalanative.testinterface.common.RPCCore$ClosedException: scala.scalanative.testinterface.NativeRunnerRPC$RunTerminatedException
[error]         at scala.scalanative.testinterface.common.RPCCore.helpClose(RPCCore.scala:213)
[error]         at scala.scalanative.testinterface.common.RPCCore.close(RPCCore.scala:204)
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.close(NativeRunnerRPC.scala:54)
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.$anonfun$new$1(NativeRunnerRPC.scala:43)
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.$anonfun$new$1$adapted(NativeRunnerRPC.scala:42)
[error]         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
[error]         at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423)
[error]         at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
[error]         at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1311)
[error]         at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1841)
[error]         at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1806)
[error]         at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
[error] Caused by: scala.scalanative.testinterface.NativeRunnerRPC$RunTerminatedException
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.$anonfun$new$1(NativeRunnerRPC.scala:43)
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.$anonfun$new$1$adapted(NativeRunnerRPC.scala:42)
[error]         at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
[error]         at java.base/java.util.concurrent.ForkJoinTask$RunnableExecuteAction.exec(ForkJoinTask.java:1423)
[error]         at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
[error]         at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1311)
[error]         at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1841)
[error]         at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1806)
[error]         at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:177)
[error] Caused by: java.lang.IllegalStateException: Unknown opcode: 4
[error]         at scala.scalanative.testinterface.common.RPCCore.$anonfun$handleMessage$1(RPCCore.scala:94)
[error]         at scala.scalanative.testinterface.common.RPCCore.$anonfun$handleMessage$1$adapted(RPCCore.scala:43)
[error]         at scala.scalanative.testinterface.common.Serializer$.withInputStream(Serializer.scala:52)
[error]         at scala.scalanative.testinterface.common.RPCCore.handleMessage(RPCCore.scala:43)
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.$anonfun$runner$1(NativeRunnerRPC.scala:31)
[error]         at scala.scalanative.testinterface.NativeRunnerRPC.$anonfun$runner$1$adapted(NativeRunnerRPC.scala:31)
[error]         at scala.scalanative.testinterface.ComRunner$$anon$1.run(ComRunner.scala:61)
[error] (coreTestsNative / Test / testOnly) scala.scalanative.testinterface.common.RPCCore$ClosedException: scala.scalanative.testinterface.NativeRunnerRPC$RunTerminatedException

looks like I might be building Scala native locally tomorrow 😄

@alterationx10
Contributor Author

alterationx10 commented Mar 16, 2023

I'm going to convert this to a draft PR so the tests don't run for now.

Some updates for visibility: I believe the above error from the test interface was due to events getting sent to sbt twice (so the runner thought the jobs were done and shut down, etc.)

Going back to square one: if we don't do anything with the specs, and just look at calling execute with a simple ZIO for-comprehension with some example code, there are 4 points listed in the comments (assuming more than one test run - it always seems to work on a single test run):

  // An implementation of Clock.sleep that returns a Right
  def sleep2(duration: => Duration)(implicit trace: Trace): UIO[Unit] =
    ZIO.asyncInterrupt { cb =>
      val canceler =
        Clock.globalScheduler.schedule(() => cb(ZIO.unit), duration)(
          Unsafe.unsafe
        )
      Right(ZIO.unit)
      // Left(ZIO.attempt(canceler()).orDie)
    }

  def execute(): Array[Unit] = {
    Runtime.default.unsafe.fork {
      (
        for {
          _ <- ZIO.unit.debug("1")
          _ <- Clock.sleep(1.milli) // 1. This will break on subsequent calls
          // 2. Calling the Timer directly will not fail, and we can use the canceler as well.
          //   canceler: scala.scalanative.loop.Timer =
          //     scala.scalanative.loop.Timer
          //       .timeout(FiniteDuration(1, TimeUnit.MILLISECONDS))(() => ())
          //   _ = canceler.clear()
          // 3. If we use a version of Clock.sleep that returns a Right instead of a Left, it works
          //   _ <- sleep2(1.milli)
          _ <- ZIO.unit.debug("2") // <- you won't see this in the non-working cases
        } yield ()
      )
    }(Trace.empty, Unsafe.unsafe)
    // 4. If we use Clock.sleep, and then do an immediate .run of a Clock.sleep, things start working again.
    // Runtime.default.unsafe
    //   .run(ZIO.sleep(1.nano).debug("snooze"))(Trace.empty, Unsafe.unsafe)
    Array()
  }

Edit - I guess making this a draft PR doesn't stop CI tests from running 😬

@sideeffffect
Member

@alterationx10 Hello Mark. Do you think you could resuscitate this PR? It would be awesome to have running tests in ZIO 2 after all that time 😊

@alterationx10
Contributor Author

@sideeffffect Yes - this PR still haunts my dreams 😆 I'll try and re-ignite the spark starting next week and see if we can get it done!

@adamgfraser
Contributor

We already run tests on Scala Native.

@sideeffffect
Member

I see, it was this PR #8080
That's great news! 🎉
Thank you, Adam.

@adamgfraser
Contributor

No problem! Excited to see what you do with it!
