Initial spec text for "navigator.gpu" + small fixes #479
Conversation
In shaders there are several texture types for each dimensionality, depending on their component type. The component type can be float, uint, or sint, with perhaps depth/stencil in the future if WebGPU allows reading such textures. The component type of a GPUTextureView's format must match the component type of its binding in the shader module. This is for several reasons:

- Vulkan requires the following: "The Sampled Type of an OpTypeImage declaration must match the numeric format of the corresponding resource in type and signedness, as shown in the SPIR-V Sampled Type column of the Interpretation of Numeric Format table, or the values obtained by reading or sampling from this image are undefined."
- It is also required in OpenGL for texture units to be complete: a uint or sint texture unit used with a non-nearest sampler is incomplete and returns black texels.
- Similar constraints must exist in other APIs.

To encode this compatibility constraint, a new member is added to GPUBindGroupLayoutBinding: a new enum, GPUTextureComponentType, that gives the component type of the texture.
This is the most common case and avoids having an optional dictionary member with no default value (but that still requires a value for texture bindings).
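For illustration, a binding carrying the new member might look like the following sketch. The field names and values are assumptions based on the description above, not final spec IDL:

```javascript
// Hypothetical sketch of a GPUBindGroupLayoutBinding entry carrying the
// proposed GPUTextureComponentType member. Names/values are assumptions
// from the PR description, not the final spec.
const textureBinding = {
  binding: 0,
  visibility: 2,                 // e.g. a fragment-stage bit (value assumed)
  type: "sampled-texture",
  textureComponentType: "float", // must match the shader's sampled type:
                                 // "float", "uint", or "sint"
};

// A uint texture view bound where the shader declares a float sampled type
// would now be a validation error instead of undefined behavior.
console.log(textureBinding.textureComponentType); // prints: float
```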
The following code acquires the default {{GPUDevice}}.

<pre highlight="js">
navigator.gpu.requestAdapter().then(adapter => {
</pre>
nit: maybe use await, since that's the new way of doing promises in JS?
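For illustration, the await form of the example might read as follows. This is a sketch: the stub gpu object stands in for navigator.gpu so the snippet can run outside a browser, and requestDevice is assumed from the rest of the PR.

```javascript
// Sketch of the same example rewritten with await, per the nit above.
async function getDefaultDevice(gpu) {
  const adapter = await gpu.requestAdapter();
  const device = await adapter.requestDevice();
  return device;
}

// Minimal stub demonstrating only the control flow:
const stubGpu = {
  requestAdapter: async () => ({
    requestDevice: async () => "default-device",
  }),
};

getDefaultDevice(stubGpu).then(d => console.log(d)); // prints: default-device
```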
LGTM
unsigned long maxSampledTexturesPerShaderStage = 16;
unsigned long maxSamplersPerShaderStage = 16;
unsigned long maxStorageBuffersPerPipelineLayout = 8;
unsigned long maxStorageBuffersPerShaderStage = 4;
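For reference, the defaults in the diff above can be restated as a plain object (a sketch; the remaining GPULimits members are omitted):

```javascript
// Sketch: the limit defaults from the diff above, as a plain object.
// Other GPULimits members are omitted.
const defaultLimits = {
  maxSampledTexturesPerShaderStage: 16,
  maxSamplersPerShaderStage: 16,
  maxStorageBuffersPerPipelineLayout: 8,
  maxStorageBuffersPerShaderStage: 4,
};

// Per-stage limits constrain each shader stage independently, so a whole
// pipeline layout may use more storage buffers in total (here, 8) than
// any single stage may see (here, 4).
console.log(defaultLimits.maxStorageBuffersPerShaderStage); // prints: 4
```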
This is a functional change, and it looks good to me.
Yes, this is more in line with Vulkan's limits; a change I had meant to make a while back.
Would you mind at some point making a centralized doc (e.g. design/Limits.md) that explains how we came to each of the limits? e.g. point to the Vulkan limit that this represents. I think that info is probably all in issues, but it would be easier to find.
Sure. To clarify, something like this should be separate from the spec itself?
I think so. It's more of an appendix since it references the other API spec/docs. And it'll be easy to fold in later if we want.
LGTM. I don't think we need a ton of text in this section. Few nits/suggestions.
spec/index.bs
};
</script>

A {{GPU}} object is the entry point to the WebGPU API. It is used to create [=adapters=].
nit: I'd move this above the IDL, and make this the definition with <dfn interface>GPU</dfn>
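A sketch of that restructuring in Bikeshed source; the IDL member shown is an assumption (only requestAdapter is known from the example earlier in the PR):

```html
A <dfn interface>GPU</dfn> object is the entry point to the WebGPU API.
It is used to create [=adapters=].

<script type=idl>
interface GPU {
    Promise<GPUAdapter> requestAdapter(optional GPURequestAdapterOptions options);
};
</script>
```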
Co-Authored-By: Kai Ninomiya <[email protected]>
... minus the limit part that needs an explainer (an issue would work too!)
* Add texture size tests for 1D/2D/3D textures
* Update webgpu/types to 0.0.42 for texture dimension limits
* Fixes/Skip: total texture footprint issue, compressed texture alignment issue
Updates #471.
Unfortunately it's not much; I thought this would be more, but I'm not sure what else needs to go here.
My intention with the examples is to add more if/when device creation gets more complicated. Does GPUAdapterOptions/"power-preference" need an example in its current state? There isn't a way to query what kind of GPU was provided, since it's just a hint.