
Conversation


@luis3m luis3m commented May 23, 2020

Closes #3682

@luis3m luis3m requested a review from iravid as a code owner May 23, 2020 00:10
@luis3m luis3m force-pushed the mergeWith-queue branch 2 times, most recently from af6d0a8 to 4979220 Compare May 23, 2020 00:23

luis3m commented May 23, 2020

I will update at some point tomorrow to fix Dotty compilation issue. I think I can also get rid of the fiber ref

Edit: Done, fixed dotty compilation issue and refactored the solution

@luis3m luis3m force-pushed the mergeWith-queue branch 2 times, most recently from 7f65a41 to e68e435 Compare May 23, 2020 03:59

@regiskuckaertz regiskuckaertz left a comment


Hi @luis3m ! Thanks a lot for picking this up 💪 I think there are a few edge cases to iron out; I've left a few comments below. Let me know if it makes sense.


luis3m commented May 23, 2020

Taking a look at why zipWithLatest fails.

for {
  handoff <- ZStream.Handoff.make[Take[E1, O3]].toManaged_
Member


This is actually an interesting approach we could take for all concurrent combinators. It's easier to reason about because it doesn't introduce any implicit buffers. If the user wants a buffer, they can easily add one with .buffer.

@regiskuckaertz


Yes! And Handoff is quite nice, as it acts as a semaphore between the two fibers as well.
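The rendezvous behaviour described above can be approximated with public ZIO 1.x primitives. This is a simplified sketch, not the actual private `ZStream.Handoff` implementation (the name `HandoffSketch` is made up): a back-pressured queue of capacity 1 makes a second `offer` suspend until the first value is taken, so the two fibers proceed roughly in lock-step.

```scala
import zio._

// Simplified model of the handoff described above (NOT the real
// private ZStream.Handoff): a bounded queue of capacity 1 suspends
// a second `offer` until the previous value is taken, so producer
// and consumer alternate, which is the semaphore-like behaviour
// mentioned in the comment above.
final class HandoffSketch[A] private (queue: Queue[A]) {
  def offer(a: A): UIO[Unit] = queue.offer(a).unit
  def take: UIO[A]           = queue.take
  def poll: UIO[Option[A]]   = queue.poll
}

object HandoffSketch {
  def make[A]: UIO[HandoffSketch[A]] =
    Queue.bounded[A](1).map(new HandoffSketch(_))
}
```

Because there is no hidden buffering beyond the single in-flight value, a user who wants buffering can add it explicitly with `.buffer`, as noted above.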


luis3m commented May 23, 2020

@iravid by replacing .forkManaged with .toManaged_.fork I got zipWithLatest passing; it was hanging before.

I suppose I must add fiber interruption back for those cases where one fiber has finished but the other is still trying to pull.

Edit: tried a test case, but it seems interrupting the fibers isn't needed anyway 🤔

@luis3m luis3m force-pushed the mergeWith-queue branch from bf21963 to 3cb7738 Compare May 23, 2020 16:21
@regiskuckaertz

Great work! I think it's almost there; there's that end-of-stream invariant to sort out, but you may have a better idea than I did.

@luis3m luis3m force-pushed the mergeWith-queue branch from 3cb7738 to 5c425c9 Compare May 23, 2020 21:31

luis3m commented May 23, 2020

Resolved conflicts.


luis3m commented May 23, 2020

@regiskuckaertz I'd been trying to figure out why the following code wasn't working.

for {
  done   <- done.get
  take   <- if (done.contains(true)) handoff.poll.some else handoff.take
  result <- take.done // previously IO.done(take)
} yield result

Moving back to RefM got it working, since it changes the state after offering to the queue, not before as the current implementation does. Though, to be honest, I don't yet understand why using Ref + handoff.take (only) works, as opposed to Ref + handoff.take / handoff.poll.

Edit: I guess handoff.take alone works in the current test cases because it waits, whereas handoff.poll does not. Given that Ref changes the state before any production, it's possible that handoff.poll will terminate the stream right before it actually should.

RefM will offer to the Handoff and then update the state, which means handoff.poll will happen after both fibers have finished producing chunks.
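The ordering invariant described above can be sketched as follows. This is illustrative only, not the PR's actual code: the helper name `signalDone` is made up and a plain `Queue` stands in for the internal Handoff. The point is that RefM runs the effect inside the update, so the offer completes before any other fiber can observe the updated `done` state.

```scala
import zio._

// Illustrative sketch (names hypothetical, Queue stands in for the
// internal Handoff). RefM serializes the effect inside the update,
// so the final value is offered to the handoff *before* any other
// fiber can observe done == Some(true) and decide to poll -- which
// is why a plain Ref, updated before offering, could let a consumer
// poll an empty handoff and end the stream too early.
def signalDone[A](
  done: RefM[Option[Boolean]],
  handoff: Queue[A],
  last: A
): UIO[Unit] =
  done.update(_ => handoff.offer(last).as(Some(true)))
```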


luis3m commented May 24, 2020

  1. Updated with the requested change about poll
  2. Added a test case which fails with Ref but passes with RefM

@luis3m luis3m requested a review from regiskuckaertz May 24, 2020 02:00
regiskuckaertz previously approved these changes May 24, 2020

@regiskuckaertz regiskuckaertz left a comment


Great detective work! This implementation turns out to be much better in many ways 🎉

Comment on lines 2020 to 2021
_ <- handler(chunksL.map(_.map(l)), List(L, E).contains(strategy)).fork.toManaged_
_ <- handler(chunksR.map(_.map(r)), List(R, E).contains(strategy)).fork.toManaged_
Member


Suggested change
_ <- handler(chunksL.map(_.map(l)), List(L, E).contains(strategy)).fork.toManaged_
_ <- handler(chunksR.map(_.map(r)), List(R, E).contains(strategy)).fork.toManaged_
_ <- handler(chunksL.map(_.map(l)), List(L, E).contains(strategy)).forkManaged
_ <- handler(chunksR.map(_.map(r)), List(R, E).contains(strategy)).forkManaged

Tests pass with this change, and we get the benefit of knowing that the running streams are definitely interrupted before their ZManaged finalizers run.

Member


Ah zipWithLatest is failing. I'll check why...

Contributor Author


@iravid yes, I had that one failing with forkManaged

Contributor Author


Pulling was hanging for some reason. Both fibers were stuck and the test was timing out.

Member


Looks like a bug in Handoff, I think. Digging into it.

Member


Well it's definitely something with the TestClock. The test hangs in the schedule's sleep.

Contributor Author


Are you looking into it or is this something Adam should check?

Member


Yes, found the problem. We're just using it wrong: the adjust call happens concurrently with the fibers sleeping in the stream. Checking how best to fix it now; it's basically the same thing that assertWithChunkCoordination fixes.

Member


Oh, I must not have saved the file between test runs. Changing forkDaemon -> fork in ZManaged#fork does fix the problem.

Now I'm pretty sure it's because the TestClock doesn't see the producer fibers suspending.

For now @luis3m let's change this from .fork.toManaged_ to .fork.interruptible.toManaged(_.interrupt). This works correctly.
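Written out as a standalone helper, the suggested pattern looks like this (a sketch; the name `forkScoped` is made up, using ZIO 1.x signatures):

```scala
import zio._

// Sketch of the pattern suggested above (`forkScoped` is a made-up
// name). The effect is forked interruptibly and the fiber's
// interruption is registered as the ZManaged finalizer, so the
// producer is shut down when the scope closes, which a bare
// `.fork.toManaged_` would not guarantee.
def forkScoped[R, E, A](effect: ZIO[R, E, A]): ZManaged[R, Nothing, Fiber.Runtime[E, A]] =
  effect.fork.interruptible.toManaged(_.interrupt)
```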

Contributor Author


@iravid done


@iravid iravid left a comment


Awesome work @luis3m!

@iravid iravid merged commit 242639e into zio:master May 24, 2020

Development

Successfully merging this pull request may close these issues.

Replace the implementation of ZStream#merge with one that doesn't race effects
