146 changes: 138 additions & 8 deletions spec/index.bs
@@ -838,6 +838,8 @@ dictionary GPULimits {
GPUSize32 maxStorageBuffersPerShaderStage = 4;
GPUSize32 maxStorageTexturesPerShaderStage = 4;
GPUSize32 maxUniformBuffersPerShaderStage = 12;
GPUSize32 maxTextureSize = 8192;
GPUSize32 maxTextureLayers = 256;
};
</script>

@@ -938,6 +940,23 @@ dictionary GPULimits {
across all {{GPUPipelineLayoutDescriptor/bindGroupLayouts}}
when creating a {{GPUPipelineLayout}}.

Higher is [=better=].

: <dfn>maxTextureSize</dfn>
::
The maximum size of a texture in any single dimension.

Note: This applies to the width of 1D textures and to the width and height of 2D textures.
It does not apply to 3D textures.

Higher is [=better=].

: <dfn>maxTextureLayers</dfn>
::
The maximum number of layers in an array texture.

Note: This is also used for the size limit of 3D textures.
Comment (Contributor):

All three dimensions are tied to maxTextureLayers?

Is there a fundamental reason this is true (i.e. coming from the native APIs)? If not, we should probably split up the limits in case there is hardware out there where this would be overly restrictive.

Also would you mind updating https://github.com/gpuweb/gpuweb/blob/master/design/Limits.md to document where these numbers come from?


Higher is [=better=].
</dl>

@@ -1178,7 +1197,7 @@ interface GPUBufferUsage {

1. If this call doesn't follow the [$createBuffer Valid Usage$]:

1. Retun an error buffer.
1. Return an error buffer.

Issue(gpuweb/gpuweb#605): Explain that the resulting error buffer can still be mapped at creation.

@@ -1423,6 +1442,10 @@ Issue: define <dfn dfn>mipmap level</dfn>, <dfn dfn>array layer</dfn>, <dfn dfn>

## <dfn interface>GPUTexture</dfn> ## {#texture-interface}

{{GPUTexture|GPUTextures}} are created via
{{GPUDevice/createTexture(descriptor)|GPUDevice.createTexture(descriptor)}},
which returns a new texture.

<script type=idl>
[Serializable]
interface GPUTexture {
@@ -1493,6 +1516,104 @@ interface GPUTextureUsage {
};
</script>

### <dfn method for=GPUDevice>createTexture(descriptor)</dfn> ### {#GPUDevice-createTexture}

<div algorithm="GPUDevice.createTexture">
**Arguments:**
- {{GPUTextureDescriptor}} |descriptor|

**Returns:** {{GPUTexture}}

1. If the device is lost, or if this call doesn't follow the [$createTexture Valid Usage$], return an error texture.
1. Let |t| be a new {{GPUTexture}} object.
1. Set |t|.{{GPUTexture/[[textureSize]]}} to |descriptor|.{{GPUTextureDescriptor/size}}.
Comment (Contributor):

did you consider just making the whole |descriptor| to be in the internal slot?

Comment (Contributor Author):

Should it be? I was just copying how buffers did it.

Also, that reminds me, are slots copied by value or by reference? It would be bad if the spec said that modifying a descriptor after creating the texture caused the texture to be modified.

Comment (Contributor):

It would be just easier to specify if you say the descriptor is copied by value into the internal slot.

Comment (Contributor):

Unfortunately right now copying patterns from other parts of the spec doesn't work well because they haven't been copyedited.

I want to change more of the spec to use this pattern (just copying the whole descriptor directly into an internal slot).

1. Set |t|.{{GPUTexture/[[sampleCount]]}} to |descriptor|.{{GPUTextureDescriptor/sampleCount}}.
1. Set |t|.{{GPUTexture/[[dimension]]}} to |descriptor|.{{GPUTextureDescriptor/dimension}}.
1. Set |t|.{{GPUTexture/[[format]]}} to |descriptor|.{{GPUTextureDescriptor/format}}.
1. Set |t|.{{GPUTexture/[[textureUsage]]}} to |descriptor|.{{GPUTextureDescriptor/usage}}.
1. Return |t|.
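The copy-by-value semantics discussed in the review thread can be illustrated with a short sketch. This is a hypothetical Python stand-in (the `TextureDescriptor` and `Texture` classes are invented for illustration, not the real API): because each internal slot is copied out of the descriptor, mutating the descriptor after creation does not affect the texture.

```python
import copy
from dataclasses import dataclass

# Hypothetical stand-ins for GPUTextureDescriptor / GPUTexture.
@dataclass
class TextureDescriptor:
    size: list                     # [width, height, depth]
    mip_level_count: int = 1
    sample_count: int = 1
    dimension: str = "2d"
    format: str = "rgba8unorm"
    usage: int = 0

class Texture:
    def __init__(self, descriptor: TextureDescriptor):
        # Internal slots are copied by value, so later mutation of the
        # descriptor does not change the created texture.
        self.texture_size = copy.deepcopy(descriptor.size)
        self.sample_count = descriptor.sample_count
        self.dimension = descriptor.dimension
        self.format = descriptor.format
        self.texture_usage = descriptor.usage

desc = TextureDescriptor(size=[256, 256, 1])
tex = Texture(desc)
desc.size[0] = 1                   # mutate the descriptor after creation
assert tex.texture_size == [256, 256, 1]   # the texture is unaffected
```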

<div algorithm>
<dfn abstract-op>maximum mipLevel count</dfn>(dimension, size)
**Arguments:**
- {{GPUTextureDescriptor/dimension}} |dimension|
- {{GPUTextureDescriptor/size}} |size|

1. Calculate the values of |w|, |h|, and |d|. If the |dimension| is "1d":

1. Let |w| = |size|.[=Extent3D/width=].
1. Let |h| = 1.
Comment (Contributor):

Isn't Extent3D/height guaranteed to be 1 for 1D textures? I.e. can we just use Extent3D/width/height/depth for w/h/d instead of having this switch over the dimension?

1. Let |d| = 1.

1. Else if the |dimension| is "2d":

1. Let |w| = |size|.[=Extent3D/width=].
1. Let |h| = |size|.[=Extent3D/height=].
1. Let |d| = 1.

1. Else (the |dimension| is "3d"):

1. Let |w| = |size|.[=Extent3D/width=].
1. Let |h| = |size|.[=Extent3D/height=].
1. Let |d| = |size|.[=Extent3D/depth=].

1. Let |m| = the maximum value of |w|, |h|, and |d|.
1. Return one plus the greatest integral value of |x| for which 2^|x| <= |m|.
Comment (Contributor):

Pretty sure this will work:

Suggested change
1. Return one plus the greatest integral value of |x| for which 2^|x| <= |m|.
1. Return one plus the greatest integral value of |x| for which 2<sup>|x|</sup> &le; |m|.

Comment (Contributor):

shouldn't this be < instead of <=? If m is already a power of two, say 2^k, we consider the mipmap count to be k, while the current argument would produce k+1

</div>
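The algorithm above can be sketched as a small Python function (a non-normative transliteration; `max_mip_level_count` is a hypothetical helper name). For a positive integer |m|, "one plus the greatest integral |x| with 2^|x| <= |m|" is exactly `m.bit_length()`, which avoids floating-point `log2`:

```python
def max_mip_level_count(dimension: str, size: tuple) -> int:
    """Sketch of [$maximum mipLevel count$]; size is (width, height, depth)."""
    width, height, depth = size
    if dimension == "1d":
        w, h, d = width, 1, 1
    elif dimension == "2d":
        w, h, d = width, height, 1
    else:  # "3d"
        w, h, d = width, height, depth
    m = max(w, h, d)
    # One plus the greatest integral x with 2**x <= m, i.e. m.bit_length().
    return m.bit_length()
```

For example, a 256×256 2D texture yields 9 mip levels (256, 128, …, 1).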

<div algorithm class=validusage>
<dfn abstract-op>createTexture Valid Usage</dfn>
Given a {{GPUDevice}} |this| and a {{GPUTextureDescriptor}} |descriptor|
Comment (Contributor) on lines +1565 to +1566:

nit: indent level changes between these two lines.

Also I'd put a paragraph break between the dfn and "Given".

Suggested change
<dfn abstract-op>createTexture Valid Usage</dfn>
Given a {{GPUDevice}} |this| and a {{GPUTextureDescriptor}} |descriptor|
<dfn abstract-op>createTexture Valid Usage</dfn>
Given a {{GPUDevice}} |this| and a {{GPUTextureDescriptor}} |descriptor|

the following validation rules apply:

1. |this| must be a [=valid=] {{GPUDevice}}.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/width=] must be nonzero.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/height=] must be nonzero.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/depth=] must be nonzero.
1. |descriptor|.{{GPUTextureDescriptor/mipLevelCount}} must be nonzero.
1. |descriptor|.{{GPUTextureDescriptor/sampleCount}} must be nonzero.
Comment (Contributor) on lines +1570 to +1574:

nit: I know this is expressed by the types, but "greater than zero" would be slightly clearer here than "nonzero"

1. If |descriptor|.{{GPUTextureDescriptor/dimension}} is "1d":

1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/width=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureSize}}.
Comment (Contributor):

nit: The "owning GPUDevice" is |this|

1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/height=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureLayers}}.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/depth=] must be 1.
1. |descriptor|.{{GPUTextureDescriptor/sampleCount}} must be 1.

1. Else if |descriptor|.{{GPUTextureDescriptor/dimension}} is "2d":

1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/width=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureSize}}.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/height=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureSize}}.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/depth=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureLayers}}.

1. Else (|descriptor|.{{GPUTextureDescriptor/dimension}} is "3d"):

1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/width=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureLayers}}.
Comment (Contributor):

For 3D textures I think each dimension should use maxTextureSize instead of maxTextureLayers, or maybe we could have split limits for 1D, 2D and 3D (3D's default limit can't be more than 2048 because that's what the majority of Vulkan devices support)

Comment (Contributor):

It seems that we have to split here:

  • maxTextureSize1D = 8192
  • maxTextureSize2D = 8192
  • maxTextureSize3D = 2048
  • maxTextureLayers = 256

1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/height=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureLayers}}.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/depth=] must be less than or equal to the owning {{GPUDevice}}'s {{GPULimits/maxTextureLayers}}.
1. |descriptor|.{{GPUTextureDescriptor/sampleCount}} must be 1.

1. If |descriptor|.{{GPUTextureDescriptor/sampleCount}} > 1:

1. |descriptor|.{{GPUTextureDescriptor/mipLevelCount}} must be 1.
1. |descriptor|.{{GPUTextureDescriptor/size}}.[=Extent3D/depth=] must be 1.
Comment (Contributor):

we should also say that the dimension has to be "2d", i.e. disallow 1D multisampled textures (for now)

Comment (Contributor):

It is done in that dimension's list of validation checks, but I think it would be good to have all sample count constraints in the same place.

Comment (Contributor Author):

I couldn't get Bikeshed to give me nested "if"s :(

Comment (Contributor):

I think it works if you add enough indentation. But sometimes 2 spaces is enough while other times you need 4 spaces. It's a bit weird.

Comment (Contributor):

Generally it requires 4 spaces; I think it usually treats 2 spaces similarly to 0 spaces.

1. |descriptor|.{{GPUTextureDescriptor/usage}} must not include the {{GPUTextureUsage/STORAGE}} bit.
1. |descriptor|.{{GPUTextureDescriptor/format}} must not be a [=compressed format=].

1. |descriptor|.{{GPUTextureDescriptor/mipLevelCount}} must be less than or equal to [$maximum mipLevel count$](|descriptor|.{{GPUTextureDescriptor/dimension}}, |descriptor|.{{GPUTextureDescriptor/size}}).

1. If |descriptor|.{{GPUTextureDescriptor/format}} is a [=depth or stencil format=]:

1. |descriptor|.{{GPUTextureDescriptor/dimension}} must be "2d".
1. |descriptor|.{{GPUTextureDescriptor/sampleCount}} must be 1.
Comment (Contributor):

nit: Could go with the rest of the multisampled texture validation.

Comment (Contributor Author):

I couldn't get Bikeshed to give me nested "if"s :(


1. |descriptor|.{{GPUTextureDescriptor/usage}} must be a combination of {{GPUTextureUsage}} values.

Comment (Contributor):

There is some missing validation:

  • first mip level width and height is a multiple of the compressed block size.
  • compressed textures can only be copySrc / copyDst / Sampled.
  • other usage constraints (we have non-renderable types), or add an issue for it. Dawn's Format.cpp summarizes what we have found by investigating the various APIs, and all usages of all formats are tested.

Comment (Contributor Author):

Unextended WebGPU doesn't have any compressed textures, and we haven't added any extensions to add any yet, and our implementation doesn't support compressed textures yet, so I'm not comfortable adding rules for compressed textures. I can open an issue for it.

I'll open a new issue for usage constraints. AFAICT Metal doesn't have any constraints here, so I'm not sure what to restrict.

Comment (Contributor):

"texture-compression-bc" is already there although the formats enums aren't in the spec yet. We're pretty comfortable adding text for compressed textures because @Jiawei-Shao did extensive investigation and testing in Dawn so we know they work in all APIs with the current validation constraints.

There's a bit of a chicken-and-egg issue because you voiced concern about adding the extension when we didn't have validation for it, and now you have concerns about adding the validation because we don't have the extension yet ^^.

That said we can do compressed texture validation in a follow-up if it feels more comfortable.

Metal does have a large number of usage constraints on texture that you can find starting page 6 of the Metal Feature Set document. We can add that independently since it will likely be a large PR in itself.

Comment (Contributor):

BTW though there are no compressed textures in the spec, there is a lot of setup for them (block based formats/texel blocks). You can rely on that ("block size > 1") to talk about compressed formats.

In this case: In practice this should be correct, but if we want we could eventually add an explicit flag to a texture format capability table that says "supports multiple samples".

1. |descriptor|.{{GPUTextureDescriptor/sampleCount}} must be either 1 or 4.

</dfn>
</div>
</div>
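The dimension-dependent size checks in the valid-usage rules above can be sketched as a pure function. This is a non-normative Python sketch using the default limits from this diff (`texture_size_valid` is a hypothetical helper name); note that reviewers flag the 3D case, which as written bounds all three dimensions by maxTextureLayers, as likely too restrictive:

```python
# Default limits from this diff; reviewers suggest splitting these
# (e.g. a separate, smaller maxTextureSize3D).
MAX_TEXTURE_SIZE = 8192
MAX_TEXTURE_LAYERS = 256

def texture_size_valid(dimension: str, width: int, height: int, depth: int) -> bool:
    """Per-dimension size validation, as written in the diff."""
    if min(width, height, depth) <= 0:
        return False
    if dimension == "1d":
        return (width <= MAX_TEXTURE_SIZE
                and height <= MAX_TEXTURE_LAYERS
                and depth == 1)
    if dimension == "2d":
        return (width <= MAX_TEXTURE_SIZE
                and height <= MAX_TEXTURE_SIZE
                and depth <= MAX_TEXTURE_LAYERS)
    # "3d": every dimension is bounded by maxTextureLayers as written.
    return (width <= MAX_TEXTURE_LAYERS
            and height <= MAX_TEXTURE_LAYERS
            and depth <= MAX_TEXTURE_LAYERS)
```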

## GPUTextureView ## {#gpu-textureview}

<script type=idl>
@@ -1662,14 +1783,23 @@ enum GPUTextureFormat {
};
</script>

The following texture formats are considered <dfn dfn for="GPUTextureFormat">depth or stencil format</dfn>s:

- "depth32float"
- "depth24plus"
- "depth24plus-stencil8"

There are no <dfn dfn for="GPUTextureFormat">compressed format</dfn>s defined in unextended WebGPU.
Extensions may define some, though.

The `depth24plus` family of formats ({{GPUTextureFormat/depth24plus}} and
{{GPUTextureFormat/depth24plus-stencil8}})
must have a depth-component precision of
1 ULP &le; 1 / (2<sup>24</sup>).

Note: This is unlike the 24-bit unsigned normalized format family typically
found in native APIs, which has a precision of
1 ULP = 1 / (2<sup>24</sup> &minus; 1).
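The difference between the two precision requirements can be checked numerically (a quick non-normative illustration):

```python
# WebGPU depth24plus bound: 1 ULP <= 1 / 2^24
ulp_bound_webgpu = 1 / (2 ** 24)
# Native 24-bit unorm family:  1 ULP  = 1 / (2^24 - 1)
ulp_native_unorm = 1 / (2 ** 24 - 1)

# The native unorm step is marginally coarser than the WebGPU bound,
# so a plain 24-bit unorm format does not by itself satisfy it.
assert ulp_bound_webgpu < ulp_native_unorm
```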

<script type=idl>
enum GPUTextureComponentType {
Expand Down