Conversation


@Kangz Kangz commented Apr 15, 2020

This also does a number of cleanups to match the style of other WebGPU
functions with the valid usage section.


Preview | Diff

@Kangz Kangz requested review from kvark, kainino0x and JusSn April 15, 2020 13:19
::
The range of this {{GPUBuffer}} that is mapped.

: <dfn>\[[mapped_ranges]]</dfn> of type `sequence<ArrayBuffer>` or `null`.
Contributor

Do I understand correctly that these may intersect? It's not obvious. If so, it would be worth mentioning that in the spec, along with what the behavior is going to be.
For example, can I move one of them to a different worker? What happens if I try to read/write the same range in multiple ArrayBuffer objects from different workers?

Contributor Author

The spec for getMappedRange says:

Let m be a new ArrayBuffer of size size pointing at the content of this.[[mapping]] at offset offset - this.[[mapping_range]][0].

Agreed that it isn't super clear; I'll call out in a note that the mapped ranges can overlap.

Contributor

so what about the act of writing to the same sub-range from different workers?

Contributor Author

They are ArrayBuffers, not SharedArrayBuffers. Also, there is a TODO to talk about worker restrictions, and I added that getMappedRange can only be called on the worker on which mapAsync was called.

It's an extremely unfortunate restriction, but I don't see a better way to do things in JS. Open to suggestions though.

Contributor

Right. So if the user tries to send an array buffer to another thread, it gets detached, and no harm is done.
I'm still a bit concerned about multiple array buffers seeing the same memory; I'll reach out to our folks to see if that's ok.
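The detach-on-transfer behavior can be seen in plain JS with no WebGPU involved. A sketch using `structuredClone`, which performs the same transfer step as `postMessage`:

```javascript
// Transferring an ArrayBuffer detaches the original: the sender can no
// longer observe or mutate the memory, so no harm is done on its side.
const ab = new ArrayBuffer(16);
const transferred = structuredClone(ab, { transfer: [ab] });

ab.byteLength          // 0: detached buffers report a length of 0
transferred.byteLength // 16: the transferred copy now owns the memory
```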

Contributor Author

I'm confident this is ok, because you can already do that in JS with the following code:

```js
let a = new ArrayBuffer(12);
let b = new Uint32Array(a, 0, 2);
let c = new Uint32Array(a, 4, 2);
b       // Uint32Array(2) [0, 0]
c       // Uint32Array(2) [0, 0]
b[1] = 42;
c[0]    // 42: b and c are overlapping views, so the write through b is visible through c
```

Contributor

That's only one ArrayBuffer, though; this is multiple. I'd worry that engines might assume aliasing is impossible with ArrayBuffers (and that the data can't change underneath you).

SharedArrayBuffers would almost certainly be OK though, since data can change underneath them normally.
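For contrast, a quick plain-JS sketch of the invariant in question: two separately constructed ArrayBuffers never alias, so a write through one is never observable through the other.

```javascript
// In plain JS, two distinct ArrayBuffers always have distinct storage.
const first = new ArrayBuffer(8);
const second = new ArrayBuffer(8);

new Uint8Array(first)[0] = 0xff;
new Uint8Array(second)[0] // 0: the write to |first| is not visible here
```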

Contributor

So if the user tries to send an array buffer to another thread, it gets detached

But it's attached on the other thread. This would allow you to have ArrayBuffers on multiple threads that reference the same data, making them behave as SABs.

I think we should pick one of these options:

  • Returns SAB and you can send the SAB to other threads. Problem: we need to be able to detach, but SABs can't normally be detached; engines may rely upon this invariant. Also unmap races.
  • Returns a special AB that's non-transferable (could still be serializable, we'd just copy the data). Disallow getMappedRange except from the mapping thread.
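The detach problem with the first option can be demonstrated in plain JS: SharedArrayBuffers are serializable but not transferable under the structured-clone rules, so there is no existing mechanism to detach one.

```javascript
const sab = new SharedArrayBuffer(8);

let transferable = true;
try {
  // Putting a SAB in the transfer list throws, because SABs are
  // serializable but not transferable, and detaching requires a transfer.
  structuredClone(sab, { transfer: [sab] });
} catch {
  transferable = false;
}
```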

Contributor Author

Like you noted, option 1 is not doable, so it has to be option 2.

spec/index.bs Outdated
1. |this| must be a [=valid=] {{GPUDevice}}.
1. |descriptor|.{{GPUBufferDescriptor/usage}} must be a subset of |this|.[[allowed buffer usages]].
1. If |descriptor|.{{GPUBufferDescriptor/usage}} contains {{GPUBufferUsage/MAP_READ}} then
it must be a subset of {{GPUBufferUsage/MAP_READ}} | {{GPUBufferUsage/COPY_DST}}.
Contributor

So, if I see a mapAsync call somewhere, in order to understand what this can do with a buffer, I need to find out how it's created and whether it was MAP_READ here or MAP_WRITE?
I think your earlier proposal with separate calls was clearer in this sense.

Contributor Author

While I agree, that's not the takeaway from last call. If we need to debate this more, let's do it independently of this PR.

Contributor

I don't think there was a strong yes from you or anybody on the need to have a single call. It was rather a "maybe?", and now that we can see more clearly how this would look (thanks for the PR!), at least on my side the "maybe?" leans more towards "no".

Contributor Author

We discussed this offline with @kvark, @kainino0x, and @jdashg; another possibility would be to add a MapFlags argument to mapAsync that would have to match the MAP_* usage for now. It allows future extensibility and makes it clear at the map call which mode it is.
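A rough sketch of how that MapFlags check could look. The flag values and the `validateMapMode` helper are illustrative assumptions, not spec text:

```javascript
// Assumed flag values, for illustration only.
const MAP_READ = 0x1;
const MAP_WRITE = 0x2;

// mapAsync(mode, ...) would require |mode| to be exactly one MAP_* flag,
// and the buffer's creation usage would have to contain that same flag.
function validateMapMode(mode, usage) {
  const singleFlag = mode === MAP_READ || mode === MAP_WRITE;
  return singleFlag && (usage & mode) !== 0;
}

validateMapMode(MAP_WRITE, MAP_WRITE | 0x4) // true: usage contains MAP_WRITE
validateMapMode(MAP_READ, MAP_WRITE)        // false: buffer was not created with MAP_READ
```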

spec/index.bs Outdated
1. If |descriptor|.{{GPUBufferDescriptor/usage}} contains {{GPUBufferUsage/MAP_READ}} then
it must be a subset of {{GPUBufferUsage/MAP_READ}} | {{GPUBufferUsage/COPY_DST}}.
1. If |descriptor|.{{GPUBufferDescriptor/usage}} contains {{GPUBufferUsage/MAP_WRITE}} then
it must be a subset of {{GPUBufferUsage/MAP_WRITE}} | {{GPUBufferUsage/COPY_SRC}}.
Contributor

Suppose my staging buffer is for uploads (MAP_WRITE + COPY_SRC). Since we always provide the contents the buffer has, and the application can always read them, it sounds like MAP_READ is automatically supported, even if it's useless? I wonder if we should rename MAP_WRITE to just MAP, or something like MAP_READ_WRITE or MAP_RW, etc.

Contributor

I think this distinction should remain somehow because MAP_READ can be used as a COPY_DST but we don't want your staging buffer to be one, for example.

Contributor Author

Exactly. Right now, if you can't wrap shmem in GPU resources, the only thing that needs to happen for uploads via mapAsync is that, on unmap, the ranges of data used in getMappedRange (or the whole mapAsync range) get pushed to the GPU process.

When calling mapAsync, no data needs to be moved from the GPU process to the content process, because the GPU cannot write to the buffer, and the data kept resident in the content process is already up to date.
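A hypothetical sketch of that upload path (all names here are illustrative, not spec terms): the content process keeps the bytes resident, records the ranges handed out by getMappedRange, and on unmap pushes only those ranges to the GPU process.

```javascript
class UploadBuffer {
  constructor(size) {
    this.shadow = new Uint8Array(size); // stays resident and up to date
    this.pendingRanges = [];            // ranges to flush on unmap
  }
  getMappedRange(offset, size) {
    // Record the range so we know which bytes to push later.
    this.pendingRanges.push([offset, size]);
    return this.shadow.subarray(offset, offset + size);
  }
  unmap(pushToGpuProcess) {
    // Only the ranges the application actually requested are pushed.
    for (const [offset, size] of this.pendingRanges) {
      pushToGpuProcess(offset, size);
    }
    this.pendingRanges = [];
  }
}
```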

Contributor

@JusSn I'm not saying we should change MAP_READ; it's useful and needs to stay. I'm saying that MAP_WRITE could be renamed to something that indicates it can be read as well, since only COPY_SRC is allowed with it, and there is no problem with reading it on the CPU.

1. Throw an {{OperationError}}.

Issue: Specify that the rejection happens on the device timeline.
1. Let |m| be a new {{ArrayBuffer}} of size |size| pointing at the content of |this|.{{[[mapping]]}} at offset |offset| - |this|.{{[[mapping_range]]}}[0].
Contributor

Suggested change
1. Let |m| be a new {{ArrayBuffer}} of size |size| pointing at the content of |this|.{{[[mapping]]}} at offset |offset| - |this|.{{[[mapping_range]]}}[0].
1. Let |m| be a new {{ArrayBuffer}} of size |size| pointing at the content of |this|.{{[[mapping]]}} at offset |offset|.

Right? Assuming both |offset| and |this|.{{[[mapping_range]]}}[0] are relative to the start of |this|.{{[[mapping]]}}?

Contributor Author

This is confusing, but I think the original text is correct, because [[mapping]] was created by the following lines:

Set the content of m to the content of this’s allocation starting at offset offset and for size bytes.

Set this.[[mapping]] to m.

[[mapping_range]] is relative to the whole allocation of this, and so are the offset and size of getMappedRange, but [[mapping]] is a subrange of the allocation. This is all very confusing, and I'd be happy to reword things if you can find clearer names / explanations.
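A worked example of the offset arithmetic, with hypothetical numbers:

```javascript
// Suppose mapAsync covered bytes 256..512 of the buffer's allocation,
// so [[mapping_range]] is [256, 512] and [[mapping]] is 256 bytes long.
const mappingRange = [256, 512];
const mapping = new ArrayBuffer(mappingRange[1] - mappingRange[0]);

// getMappedRange(offset = 320, size = 64): |offset| is relative to the
// whole allocation, so the index into [[mapping]] is offset - mappingRange[0].
const offset = 320, size = 64;
const m = new Uint8Array(mapping, offset - mappingRange[0], size);

m.byteOffset // 64: byte 320 of the allocation is byte 64 of [[mapping]]
```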


kvark commented Apr 27, 2020

I talked to our JS folks with regards to intersecting array buffers. The conclusion is: we are fine with the intersection, but the offsets should all be aligned to 8 bytes. Otherwise, users wouldn't be able to create typed views of them.
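The alignment requirement follows from how typed-array views are constructed: a view's byte offset must be a multiple of its element size, and the largest element size (Float64Array, BigInt64Array) is 8 bytes. A quick demonstration:

```javascript
const buf = new ArrayBuffer(16);

new Float64Array(buf, 8, 1); // fine: 8 is a multiple of the 8-byte element size

let misalignedThrows = false;
try {
  new Float64Array(buf, 4, 1); // 4 is not a multiple of 8
} catch (e) {
  misalignedThrows = e instanceof RangeError;
}
```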

@kainino0x
Contributor

[editors call] Since we decided not to land this right now anyway, we may as well update it to require disjoint ranges before landing.


Kangz commented May 5, 2020

Addressed comments raised in the meeting. Let me know if I can go ahead with a squash + rebase.

@kainino0x
Contributor

Please do, and we'll probably go ahead with landing after writeBuffer.


Kangz commented May 5, 2020

Rebased and squashed.

@kainino0x
Contributor

Per the meeting let's get this merged with the outstanding things to discuss:

@kainino0x kainino0x merged commit bd36513 into gpuweb:master May 7, 2020
@Kangz Kangz deleted the map_async branch May 7, 2020 16:54
bors bot added a commit to gfx-rs/wgpu that referenced this pull request Jun 2, 2020
675: New map_async logic r=cwfitzgerald a=kvark

Matches upstream changes in gpuweb/gpuweb#708 and gpuweb/gpuweb#796
TODO:
- wgpu-native PR
- wgpu-rs gfx-rs/wgpu-rs#344

Co-authored-by: Dzmitry Malyshau <[email protected]>
JusSn pushed a commit to JusSn/gpuweb that referenced this pull request Jun 8, 2020
…b#708)

This also does a number of cleanups to match the style of other WebGPU
functions with the valid usage section.
ben-clayton pushed a commit to ben-clayton/gpuweb that referenced this pull request Sep 6, 2022