Provide texture dimension in GPUBindGroupLayoutBinding #339
Conversation
LGTM, though that's a bit unfortunate because it makes the BindGroupLayout even more complicated. On the plus side, we can better validate compatibility with the declaration of the texture in the ShaderModule, which might help avoid some undefined behavior.
@Kangz I'm not worried about complexity here. The users need to know what kind of resources they are going to bind. What I'm not sure about is whether any ISV wants to actually use the same bind group layout with different texture types. Could be something to add to our list of questions for ISVs. @RafaelCintron would you want to have a look as well?
Q: Why does needing to know the texture type for Metal mean we need to require the dimensions of the texture in WebGPU?
@RafaelCintron Good question! The short answer is: not knowing the dimensions at this point (bind group layout creation) in WebGPU would make the Metal implementation's use of argument buffers infeasible, if possible at all... I think it would be a good thing to design our API in such a way that an efficient translation to native is possible. In general, binding resources in Metal shows up high on the CPU profiles I've seen, and argument buffers largely solve this.
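For context, here is a minimal sketch of the Metal side (not from this PR; the descriptor values are illustrative): building an `MTLArgumentEncoder` requires an `MTLArgumentDescriptor` whose `textureType` is already fixed, which is why the dimension has to be known when the layout is created.

```swift
import Metal

// Illustrative sketch of how an implementation might back a bind group
// layout entry with a Metal argument encoder. The values here are
// hypothetical; only the API calls are real Metal.
let device = MTLCreateSystemDefaultDevice()!

let textureArg = MTLArgumentDescriptor()
textureArg.index = 0
textureArg.dataType = .texture
textureArg.access = .readOnly
// Metal requires the texture type at encoder creation time, so WebGPU
// must know textureDimension when the bind group layout is created.
// Multisampling is a distinct MTLTextureType (.type2DMultisample),
// which is why a separate multisample flag is needed as well.
textureArg.textureType = .type2D

// The encoder is created once per layout and reused for every bind
// group, which is what keeps resource binding off the hot CPU path.
let encoder = device.makeArgumentEncoder(arguments: [textureArg])!
```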
Discussed at the 25 Jun 2019 teleconference.
We also need a flag for multi-sampling |
spec/index.bs
Outdated
required GPUBindingType type;
// For texture bindings only, we need to know the dimensions at the
// bind group creation time and validate the created bind groups accordingly.
GPUTextureViewDimension? textureDimension;
Is the `?` needed? I think it can be removed.
it's not needed for buffer type bindings
Right, but since it's not `required`, it should be optional by default. I don't think the extra `?` is needed, because that just makes it optional AND nullable.
I think we have problems with this elsewhere in the file too, actually.
Hm, that was not quite right. The WebIDL spec explicitly allows nullable enumerations inside dictionaries. I'm not sure exactly what the difference is between:
`GPUTextureViewDimension? textureDimension = null;`
and
`GPUTextureViewDimension textureDimension;`
except that the first will convert undefined to null, and the second will not. The second will also not allow null, I think.
That said, I think one of those two should be used, rather than what we have now.
Wouldn't `GPUTextureViewDimension?` default to null?
From what I understand (as in, very little), in WebIDL `GPUTextureViewDimension? textureDimension` would default to being undefined. Basically this is a Rust `Option<Option<GPUTextureViewDimension>>` and not an `Option<GPUTextureViewDimension>`.
Either of Kai's suggestions would work to be an `Option<GPUTextureViewDimension>`, so I have a slight preference for the shorter `GPUTextureViewDimension textureDimension`.
Alternatively, we could maybe make it default to `2D`?
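To make the `Option<Option<…>>` point concrete, here is a rough Swift analogy (illustrative only, not spec text; `String` stands in for `GPUTextureViewDimension`): a nullable optional dictionary member distinguishes three states, while a plain optional member distinguishes only two.

```swift
// `GPUTextureViewDimension? textureDimension` behaves like a Swift
// double optional: the member can be omitted, be explicitly null,
// or hold a value.
let omitted: String?? = .none            // member not present
let explicitNull: String?? = .some(nil)  // member present, value null
let set: String?? = .some("2d")          // member present with a value

// Distinguishing the three states at runtime:
switch explicitNull {
case .none: print("member omitted")
case .some(.none): print("member present, explicitly null")
case .some(.some(let value)): print("member set to \(value)")
}

// `GPUTextureViewDimension textureDimension` (no `?`) behaves like a
// single optional: the member is either absent or holds a value.
let absentOrValue: String? = "2d"
print(absentOrValue ?? "member omitted")
```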
Thanks for the clarification! Removed the `?` now.
I don't want to default to `2D` for now, since our bind group descriptor doesn't have a default either (although the texture descriptor does).
Updated now, please have another look!
LGTM, with the details of `textureDimension`'s type figured out.
* Provide texture dimension in GPUBindGroupLayoutBinding
* Add multisample flag to GPUBindGroupLayoutBinding
* Remove question mark from texture dimension
This is required to have the bind groups backed by Metal indirect argument buffers. We want `MTLArgumentEncoder` to be associated with `GPUBindGroupLayout`, and its construction requires the textureType to be known.
cc @litherum @JusSn