
Conversation

kvark
Contributor

@kvark kvark commented Dec 7, 2020

This is one of the API changes that we probably need in order to be ready for multi-queue support down the road (#1169 and friends).

Investigation bits:


Preview | Diff

@github-actions
Contributor

github-actions bot commented Dec 7, 2020

Previews, as seen at the time of posting this comment:
WebGPU | IDL
WGSL
3cc422a

@kainino0x
Contributor

kainino0x commented Dec 7, 2020

My concerns with this are:

  • Slight extra verbosity is kind of meaningless for single-queue apps.
  • Ties command buffer to a single queue instead of a whole queue family (even though we don't have such a concept in WebGPU now).
  • What if we added some kind of "virtual" reusable command buffer in the future that wasn't tied to a queue? (Probably wouldn't happen.)

I think I'm slightly in favor of keeping createCommandEncoder where it is, but adding an optional GPUCommandEncoderDescriptor.queue member which defaults to defaultQueue. No strong opinion though.
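Sketched in WebIDL, that alternative could look something like the following (illustrative only; the member name and defaulting behavior are one reading of the suggestion, not spec text — note that an interface-typed dictionary member would need to be nullable, with null meaning defaultQueue):

dictionary GPUCommandEncoderDescriptor {
  GPUQueue? queue = null; // when null, the implementation uses device.defaultQueue
};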

@kainino0x kainino0x requested a review from Kangz December 7, 2020 22:42
@Kangz
Contributor

Kangz commented Dec 8, 2020

+1 to @kainino0x's concerns. (And I think we'd like reusable command buffers at some point, but I don't know how well it would work in wgpu to have them usable on all queue types.)

@kvark
Contributor Author

kvark commented Dec 8, 2020

Let me try to downplay the concerns :)

Slight extra verbosity is kind of meaningless for single-queue apps.

Command encoders are created in only a few places in application code. Most command recording in apps is expected to work with an existing encoder. So we are talking about just a few .defaultQueue additions, and that's assuming users don't already have the queue accessible separately (some will).
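For reference, the shape of the change under discussion, sketched in WebIDL (illustrative, not the actual diff):

// today: encoders come from the device
partial interface GPUDevice {
  GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
};

// this PR: encoders come from a queue
partial interface GPUQueue {
  GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
};

Under this shape, a single-queue app would write device.defaultQueue.createCommandEncoder() where it previously wrote device.createCommandEncoder().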

Ties command buffer to a single queue instead of a whole queue family (even though we don't have such a concept in WebGPU now)

Right. We could have a discussion about queue families. It would need to be resolved before we proceed with the PR.

What if we added some kind of "virtual" reusable command buffer in the future that wasn't tied to a queue?

First time I'm hearing about this. Note that "reusability" of command buffers can be implemented without the "virtual" aspect, and it would likely be more efficient, since it could be backed directly by VkCommandBuffer and ID3D12CommandList.

I think I'm slightly in favor of keeping createCommandEncoder where it is, but adding an optional queue member which defaults to defaultQueue

Are you talking about a member of GPUCommandEncoderDescriptor? I suppose it would be a nullable member then.
It sounds good in this case, I haven't thought of that.

@kvark kvark changed the title Move command encoding to a queue Associate command encoding with a queue Dec 8, 2020
@github-actions
Contributor

github-actions bot commented Dec 8, 2020

Previews, as seen at the time of posting this comment:
WebGPU | IDL
WGSL
0f05d2d

Contributor

@kainino0x kainino0x left a comment


Are you talking about a member of GPUCommandEncoderDescriptor? I suppose it would be a nullable member then.
It sounds good in this case, I haven't thought of that.

Yes, exactly, sorry for the typo.

Contributor

@Kangz Kangz left a comment


Having queue in the descriptor LGTM.

@kvark
Contributor Author

kvark commented Jul 26, 2021

Consider this blocked on #1977

@kvark
Contributor Author

kvark commented Aug 16, 2021

Editors' discussion: we'd want to know how multi-queue interacts with queue families. If WebGPU exposes queue families, a better API would have GPUCommandEncoderDescriptor know about the queue family rather than the exact queue. This is not the same as creating the command encoder on a queue. Something like:

dictionary GPUCommandEncoderDescriptor {
  GPUCommandType type; // "general", "compute", or "transfer"
};

With this in mind, our recommendation is to not proceed with this PR for now. Instead, try to think more about how multi-queue is exposed, possibly basing it off #1306 in the direction of how D3D12 does it.

@kvark kvark closed this Aug 30, 2021
ben-clayton pushed a commit to ben-clayton/gpuweb that referenced this pull request Sep 6, 2022
…1417)

This PR adds unimplemented stubs for the read-modify-write atomic operations.

 * `atomicAdd`
 * `atomicAnd`
 * `atomicCompareExchangeWeak`
 * `atomicExchange`
 * `atomicMax`
 * `atomicMin`
 * `atomicOr`
 * `atomicSub`
 * `atomicXor`

Issue gpuweb#1275, gpuweb#1276, gpuweb#1277, gpuweb#1278, gpuweb#1279, gpuweb#1280, gpuweb#1281, gpuweb#1282, gpuweb#1283
Labels
multi-queue Part of the multi-queue feature
3 participants