1. Introduction
This section is non-normative.
Graphics Processing Units, or GPUs for short, have been essential in enabling rich rendering and computational applications in personal computing. WebGPU is an API that exposes the capabilities of GPU hardware for the Web. The API is designed from the ground up to efficiently map to the Vulkan, Direct3D 12, and Metal native GPU APIs. WebGPU is not related to WebGL and does not explicitly target OpenGL ES.
WebGPU sees physical GPU hardware as GPUAdapters. It provides a connection to an adapter via GPUDevice, which manages resources, and the device’s GPUQueues, which execute commands. GPUDevice may have its own memory with high-speed access to the processing units. GPUBuffer and GPUTexture are the physical resources backed by GPU memory. GPUCommandBuffer and GPURenderBundle are containers for user-recorded commands. GPUShaderModule contains shader code. The other resources, such as GPUSampler or GPUBindGroup, configure the way physical resources are used by the GPU.
GPUs execute commands encoded in GPUCommandBuffers by feeding data through a pipeline, which is a mix of fixed-function and programmable stages. Programmable stages execute shaders, which are special programs designed to run on GPU hardware. Most of the state of a pipeline is defined by a GPURenderPipeline or a GPUComputePipeline object. The state not included in these pipeline objects is set during encoding with commands, such as beginRenderPass() or setBlendColor().
2. Security considerations
2.1. CPU-based undefined behavior
A WebGPU implementation translates the workloads issued by the user into API commands specific to the target platform. Native APIs specify the valid usage for the commands (for example, see vkCreateDescriptorSetLayout) and generally don’t guarantee any outcome if the valid usage rules are not followed. This is called "undefined behavior", and it can be exploited by an attacker to access memory they don’t own, or force the driver to execute arbitrary code.
In order to disallow insecure usage, the range of allowed WebGPU behaviors is defined for any input. An implementation has to validate all the input from the user and only reach the driver with valid workloads. This document specifies all the error conditions and handling semantics. For example, specifying the same buffer with intersecting ranges in both the "source" and "destination" of copyBufferToBuffer() results in GPUCommandEncoder generating an error, and no other operation occurring.
See § 22 Errors & Debugging for more information about error handling.
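For illustration, here is a minimal non-normative sketch of the invalid copy described above (the device and the 256-byte buffer are assumed to already exist; their creation is omitted):

const encoder = device.createCommandEncoder();
// "source" and "destination" are the same buffer, and the byte ranges
// [0, 16) and [8, 24) intersect, so this copy does not follow valid usage.
encoder.copyBufferToBuffer(buffer, 0, buffer, 8, 16);
// The implementation generates an error instead of performing the copy;
// how the error is surfaced is described in § 22 Errors & Debugging.
const commandBuffer = encoder.finish();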
2.2. GPU-based undefined behavior
WebGPU shaders are executed by the compute units inside GPU hardware. In native APIs, some of the shader instructions may result in undefined behavior on the GPU. In order to address that, the shader instruction set and its defined behaviors are strictly defined by WebGPU. When a shader is provided, the WebGPU implementation has to validate it before doing any translation (to platform-specific shaders) or transformation passes.
2.3. Out-of-bounds access in shaders
Shaders can access physical resources either directly or via texture units, which are fixed-function hardware blocks that handle texture coordinate conversions. Validation on the API side can only guarantee that all the inputs to the shader are provided and that they have the correct usage and types. The API side cannot guarantee that the data is accessed within bounds if the texture units are not involved.
In order to prevent the shaders from accessing GPU memory an application doesn’t own, the WebGPU implementation may enable a special mode (called "robust buffer access") in the driver that guarantees that the access is limited to buffer bounds. Alternatively, an implementation may transform the shader code by inserting manual bounds checks.
If the shader attempts to load data outside of physical resource bounds, the implementation is allowed to:
- return a value at a different location within the resource bounds
- return a value vector of "(0, 0, 0, X)" with any "X"
- partially discard the draw or dispatch call
If the shader attempts to write data outside of physical resource bounds, the implementation is allowed to:
- write the value to a different location within the resource bounds
- discard the write operation
- partially discard the draw or dispatch call
2.4. Invalid data
When uploading floating-point data from CPU to GPU, or generating it on the GPU, we may end up with a binary representation that doesn’t correspond to a valid number, such as infinity or NaN (not-a-number). The GPU behavior in this case is subject to the accuracy of the GPU hardware implementation of the IEEE-754 standard. WebGPU guarantees that introducing invalid floating-point numbers would only affect the results of arithmetic computations and will not have other side effects.
2.5. Driver bugs
GPU drivers are subject to bugs like any other software. If a bug occurs, an attacker could possibly exploit the incorrect behavior of the driver to get access to unprivileged data. In order to reduce the risk, the WebGPU working group will coordinate with GPU vendors to integrate the WebGPU Conformance Test Suite (CTS) as part of their driver testing process, like it was done for WebGL. WebGPU implementations are expected to have workarounds for some of the discovered bugs, and support blacklisting particular drivers from using some of the native API backends.
2.6. Timing attacks
WebGPU is designed for multi-threaded use via Web Workers. Some of the objects, like GPUBuffer, have shared state which can be simultaneously accessed. This allows race conditions to occur, similar to those of accessing a SharedArrayBuffer from multiple Web Workers, which makes the thread scheduling observable and allows the creation of high-precision timers. The theoretical attack vectors are a subset of those of SharedArrayBuffer.
2.7. Denial of service
WebGPU applications have access to GPU memory and compute units. A WebGPU implementation may limit the available GPU memory to an application, in order to keep other applications responsive. For GPU processing time, a WebGPU implementation may set up a "watchdog" timer that makes sure an application doesn’t cause GPU unresponsiveness for more than a few seconds. These measures are similar to those used in WebGL.
2.8. Fingerprinting
WebGPU defines the required limits and capabilities of any GPUAdapter, and encourages applications to target these standard limits. The actual result from requestAdapter() may have better limits, and could be subject to fingerprinting.
3. Terminology & Conventions
3.1. Dot Syntax
In this specification, the . ("dot") syntax, common in programming languages, is used. The phrasing "Foo.Bar" means "the Bar member of the value (or interface) Foo."
For example, where buffer is a GPUBuffer, buffer.[[device]].[[adapter]] means "the [[adapter]] internal slot of the [[device]] internal slot of buffer."
3.2. Coordinate Systems
WebGPU’s coordinate systems match DirectX and Metal’s coordinate systems in a graphics pipeline.
- Y-axis is up in normalized device coordinates (NDC): point(-1.0, -1.0) in NDC is located at the bottom-left corner of NDC. In addition, x and y in NDC should be between -1.0 and 1.0 inclusive, while z in NDC should be between 0.0 and 1.0 inclusive. Vertices out of this range in NDC will not introduce any errors, but they will be clipped.
- Y-axis is down in framebuffer coordinates, viewport coordinates and fragment/pixel coordinates: origin(0, 0) is located at the top-left corner in these coordinate systems.
- Window/present coordinates match framebuffer coordinates.
- UV of origin(0, 0) in texture coordinates represents the first texel (the lowest byte) in texture memory.
3.3. Internal Objects
An internal object is a conceptual, non-exposed WebGPU object. Internal objects track the state of an API object and hold any underlying implementation. If the state of a particular internal object can change in parallel from multiple agents, those changes are always atomic with respect to all agents.
Note: An "agent" refers to a JavaScript "thread" (i.e. main thread, or Web Worker).
3.3.1. Invalid Objects
An internal object may be invalid. If an object is successfully created, it is valid at that moment. It may become invalid during its lifetime, but it will never become valid again.
- If there is an error in the creation of an object, it is immediately invalid. This can happen, for example, if the object descriptor doesn’t describe a valid object, or if there is not enough memory to allocate a resource.
- If an object is explicitly destroyed (e.g. GPUBuffer.destroy()), it becomes invalid.
- If the device that owns an object is lost, the object becomes invalid.
3.4. WebGPU Interfaces
A WebGPU interface is an exposed interface which encapsulates an internal object. It provides the interface through which the internal object's state is changed.
As a matter of convention, if a WebGPU interface is referred to as invalid, it means that the internal object it encapsulates is invalid.
Any interface which includes GPUObjectBase is a WebGPU interface.
interface mixin GPUObjectBase {
    attribute USVString? label;
};
GPUObjectBase has the following attributes:
label, of type USVString, nullable
    A label which can be used by development tools (such as error/warning messages, browser developer tools, or platform debugging utilities) to identify the underlying internal object to the developer. It has no specified format, and therefore cannot be reliably machine-parsed.
    In any given situation, the user agent may or may not choose to use this label.
GPUObjectBase has the following internal slots:
[[device]], of type device, readonly
    An internal slot holding the device which owns the internal object.
3.5. Object Descriptors
An object descriptor holds the information needed to create an object, which is typically done via one of the create* methods of GPUDevice.

dictionary GPUObjectDescriptorBase {
    USVString label;
};
GPUObjectDescriptorBase has the following members:
label, of type USVString
    The initial value of GPUObjectBase.label.
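As a non-normative illustration, the label in a descriptor becomes the initial label of the created object (a sketch; the buffer size, usage, and names are placeholders, not part of this specification):

const positionBuffer = device.createBuffer({
  label: "particle positions",   // GPUObjectDescriptorBase.label
  size: 256,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.VERTEX,
});
console.log(positionBuffer.label);   // "particle positions"
positionBuffer.label = "renamed";    // GPUObjectBase.label may be changed later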
4. Programming Model
4.1. Timelines
This section is non-normative.
A computer system with a user agent at the front-end and GPU at the back-end has components working on different timelines in parallel:
- Content timeline: Associated with the execution of the Web script. It includes calling all methods described by this specification.
- Device timeline: Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
- Queue timeline: Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
In this specification, asynchronous operations are used when the result value depends on work that happens on any timeline other than the Content timeline. They are represented by callbacks and promises in JavaScript.
GPUComputePassEncoder.dispatch():
- User encodes a dispatch command by calling a method of the GPUComputePassEncoder, which happens on the Content timeline.
- User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.
- The submit gets dispatched by the GPU thread scheduler onto the actual compute units for execution, which happens on the Queue timeline.
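A minimal non-normative sketch of these steps in script (computePipeline, bindGroup, and the device are assumed to have been created earlier; they are placeholders):

// Content timeline: record the dispatch into a command buffer.
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(computePipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatch(64);                 // encoded now, executed later
pass.endPass();
const commandBuffer = encoder.finish();

// Device timeline: the user agent hands the work to the driver...
device.defaultQueue.submit([commandBuffer]);
// Queue timeline: ...and the GPU eventually executes the dispatch.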
GPUDevice.createBuffer():
- User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.
- User agent creates a low-level buffer on the Device timeline.
GPUBuffer.mapAsync():
- User requests to map a GPUBuffer on the Content timeline and gets a promise in return.
- User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
- After the GPU operating on the Queue timeline is done using the buffer, the user agent maps it to memory and resolves the promise.
4.2. Memory
This section is non-normative.
Once a GPUDevice has been obtained during an application initialization routine, we can describe the WebGPU platform as consisting of the following layers:
- User agent implementing the specification.
- Operating system with low-level native API drivers for this device.
- Actual CPU and GPU hardware.
Each layer of the WebGPU platform may have different memory types that the user agent needs to consider when implementing the specification:
- The script-owned memory, such as an ArrayBuffer created by the script, is generally not accessible by a GPU driver.
- A user agent may have different processes responsible for running the content and communication to the GPU driver. In this case, it uses inter-process shared memory to transfer data.
- Dedicated GPUs have their own memory with high bandwidth, while integrated GPUs typically share memory with the system.
Most physical resources are allocated in the memory of type that is efficient for computation or rendering by the GPU. When the user needs to provide new data to the GPU, the data may first need to cross the process boundary in order to reach the user agent part that communicates with the GPU driver. Then it may need to be made visible to the driver, which sometimes requires a copy into driver-allocated staging memory. Finally, it may need to be transferred to the dedicated GPU memory, potentially changing the internal layout into one that is most efficient for GPUs to operate on.
All of these transitions are done by the WebGPU implementation of the user agent.
Note: This example describes the worst case, while in practice the implementation may not need to cross the process boundary, or may be able to expose the driver-managed memory directly to the user behind an ArrayBuffer, thus avoiding any data copies.
4.3. Resource usage
Buffers and textures can be used by the GPU in multiple ways, which can be split into two groups:
- Read-only usages: Usages like GPUBufferUsage.VERTEX or GPUTextureUsage.SAMPLED don’t change the contents of a resource.
- Mutating usages: Usages like GPUBufferUsage.STORAGE do change the contents of a resource.
Consider merging all read-only usages. <https://github.com/gpuweb/gpuweb/issues/296>
Textures may consist of separate mipmap levels and array layers, which can be used differently at any given time. Each such subresource is uniquely identified by a texture, mipmap level, and (for 2d textures only) array layer.
The main usage rule is that any subresource at any given time can only be in either:
- a combination of read-only usages
- a single mutating usage
Enforcing this rule allows the API to limit when data races can occur when working with memory. That property makes applications written against WebGPU more likely to run without modification on different platforms.
Generally, when an implementation processes an operation that uses a subresource in a different way than its current usage allows, it schedules a transition of the resource into the new state. In some cases, like within an open GPURenderPassEncoder, such a transition is impossible due to the hardware limitations. We define these places as usage scopes: each subresource must not change usage within the usage scope.
For example, binding the same buffer for GPUBufferUsage.STORAGE as well as for GPUBufferUsage.VERTEX within the same GPURenderPassEncoder would put the encoder, as well as the owning GPUCommandEncoder, into the error state. Since GPUBufferUsage.STORAGE is the only mutating usage for a buffer that is valid inside a render pass, if it’s present, this buffer can’t be used in any other way within this pass.
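A non-normative sketch of this error case (bindGroup is assumed to bind buffer as a storage buffer; pipeline and pass descriptor setup are omitted):

// buffer was created with usage: GPUBufferUsage.STORAGE | GPUBufferUsage.VERTEX
const pass = encoder.beginRenderPass(renderPassDescriptor);
pass.setBindGroup(0, bindGroup);   // uses buffer as STORAGE (mutating)
pass.setVertexBuffer(0, buffer);   // uses the same buffer as VERTEX (read-only)
pass.endPass();
// STORAGE and VERTEX conflict within one usage scope (the render pass),
// so the pass encoder and the owning GPUCommandEncoder are put into the error state.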
The subresources of textures included in the views provided to GPURenderPassColorAttachmentDescriptor.attachment and GPURenderPassColorAttachmentDescriptor.resolveTarget are considered to have OUTPUT_ATTACHMENT for the usage scope of this render pass.
The physical size of a GPUTexture subresource is the dimension of the GPUTexture subresource in texels that includes the possible extra paddings to form complete texel blocks in the subresource.
- For pixel-based GPUTextureFormats, the physical size is always equal to the size of the subresource used in the sampling hardware.
- GPUTextures in block-based compressed GPUTextureFormats always have a mipmap level 0 whose [[textureSize]] is a multiple of the texel block size, but the lower mipmap levels might not be a multiple of the texel block size and can have paddings.
For example, given a GPUTexture in a BC format whose [[textureSize]] is {60, 60, 1}: when sampling the GPUTexture at mipmap level 2, the sampling hardware uses {15, 15, 1} as the size of the subresource, while its physical size is {16, 16, 1}, as the block-compression algorithm can only operate on 4x4 texel blocks.
Document read-only states for depth views. <https://github.com/gpuweb/gpuweb/issues/514>
4.4. Synchronization
For each subresource of a physical resource, its set of usage flags is tracked on the Queue timeline. Usage flags are GPUBufferUsage or GPUTextureUsage flags, according to the type of the subresource.
This section will need to be revised to support multiple queues.
On the Queue timeline, there is an ordered sequence of usage scopes. Each item on the timeline is contained within exactly one scope. For the duration of each scope, the set of usage flags of any given subresource is constant. A subresource may transition to new usages at the boundaries between usage scopes.
This specification defines the following usage scopes:
- an individual command on a GPUCommandEncoder, such as GPUCommandEncoder.copyBufferToTexture.
- an individual command on a GPUComputePassEncoder, such as GPUProgrammablePassEncoder.setBindGroup.
- the whole GPURenderPassEncoder.
Note: calling GPUProgrammablePassEncoder.setBindGroup adds the [[usedBuffers]] and [[usedTextures]] to the usage scope regardless of whether the shader or GPUPipelineLayout actually depends on these bindings. Similarly, GPURenderEncoderBase.setIndexBuffer adds the index buffer to the usage scope (as GPUBufferUsage.INDEX) regardless of whether indexed draw calls are used afterwards.
The usage scopes are validated at GPUCommandEncoder.finish time. The implementation performs the usage scope validation by composing the set of all usage flags of each subresource used in the usage scope. A GPUValidationError is generated in the current scope with an appropriate error message if that union contains a mutating usage combined with any other usage.
5. Core Internal Objects
5.1. Adapters
An adapter represents an implementation of WebGPU on the system. Each adapter identifies both an instance of a hardware accelerator (e.g. GPU or CPU) and an instance of a browser’s implementation of WebGPU on top of that accelerator.
If an adapter becomes unavailable, it becomes invalid. Once invalid, it never becomes valid again. Any devices on the adapter, and internal objects owned by those devices, also become invalid.
Note: An adapter may be a physical display adapter (GPU), but it could also be a software renderer. A returned adapter could refer to different physical adapters, or to different browser codepaths or system drivers on the same physical adapters. Applications can hold onto multiple adapters at once (via GPUAdapter), even if some are invalid, and two of these could refer to different instances of the same physical configuration (e.g. if the GPU was reset or disconnected and reconnected).
An adapter has the following internal slots:
[[extensions]], of type sequence<GPUExtensionName>, readonly
    The extensions which can be used to create devices on this adapter.
[[limits]], of type GPULimits, readonly
    The best limits which can be used to create devices on this adapter. Each adapter limit must be the same or better than its default value in GPULimits.
Adapters are exposed via GPUAdapter.
5.2. Devices
A device is the logical instantiation of an adapter, through which internal objects are created. It can be shared across multiple agents (e.g. dedicated workers).
A device is the exclusive owner of all internal objects created from it: when the device is lost, it and all objects created on it (directly, e.g. createTexture(), or indirectly, e.g. createView()) become invalid.
A device has the following internal slots:
[[adapter]], of type adapter, readonly
    The adapter from which this device was created.
[[extensions]], of type sequence<GPUExtensionName>, readonly
    The extensions which can be used on this device. No additional extensions can be used, even if the underlying adapter can support them.
[[limits]], of type GPULimits, readonly
    The limits which can be used on this device. No better limits can be used, even if the underlying adapter can support them.
To create a new device from an adapter adapter and a GPUDeviceDescriptor descriptor:
- Set device.[[adapter]] to adapter.
- Set device.[[extensions]] to descriptor.extensions.
- Set device.[[limits]] to descriptor.limits.
Devices are exposed via GPUDevice.
6. Initialization
6.1. Examples
Need a robust example like the one in ErrorHandling.md, which handles all situations. Possibly also include a simple example with no handling.
6.2. navigator.gpu
A GPU object is available via navigator.gpu on the Window:

[Exposed=Window]
partial interface Navigator {
    [SameObject] readonly attribute GPU gpu;
};

... as well as on dedicated workers:

[Exposed=DedicatedWorker]
partial interface WorkerNavigator {
    [SameObject] readonly attribute GPU gpu;
};
6.3. GPU
GPU is the entry point to WebGPU.

[Exposed=(Window, DedicatedWorker)]
interface GPU {
    Promise<GPUAdapter> requestAdapter(optional GPURequestAdapterOptions options = {});
};

GPU has the methods defined by the following sections.
6.3.1. requestAdapter(options)
Arguments:
- optional GPURequestAdapterOptions options = {}
Returns: promise, of type Promise<GPUAdapter>.
Requests an adapter from the user agent. The user agent chooses whether to return an adapter, and, if so, chooses according to the provided options.
Returns a new promise, promise. On the Device timeline, the following steps occur:
- If the user agent chooses to return an adapter:
  - The user agent chooses an adapter adapter according to the rules in § 6.3.1.1 Adapter Selection.
  - promise resolves with a new GPUAdapter encapsulating adapter.
- Otherwise, promise rejects with an OperationError.
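A typical, non-normative call from script, which handles the case where the promise rejects:

try {
  const adapter = await navigator.gpu.requestAdapter({
    powerPreference: "low-power",   // optional hint, see § 6.3.1.1
  });
  // ... continue initialization with adapter ...
} catch (e) {
  // The promise rejected (e.g. with an OperationError): no adapter was returned.
}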
6.3.1.1. Adapter Selection
GPURequestAdapterOptions provides hints to the user agent indicating what configuration is suitable for the application.

dictionary GPURequestAdapterOptions {
    GPUPowerPreference powerPreference;
};

enum GPUPowerPreference {
    "low-power",
    "high-performance"
};
GPURequestAdapterOptions
has the following members:
powerPreference
, of type GPUPowerPreference-
Optionally provides a hint indicating what class of adapter should be selected from the system’s available adapters.
The value of this hint may influence which adapter is chosen, but it must not influence whether an adapter is returned or not.
Note: The primary utility of this hint is to influence which GPU is used in a multi-GPU system. For instance, some laptops have a low-power integrated GPU and a high-performance discrete GPU.
Note: Depending on the exact hardware configuration, such as battery status and attached displays or removable GPUs, the user agent may select different adapters given the same power preference. Typically, given the same hardware configuration and state and powerPreference, the user agent is likely to select the same adapter.
It must be one of the following values:
undefined
(or not present)-
Provides no hint to the user agent.
"low-power"
-
Indicates a request to prioritize power savings over performance.
Note: Generally, content should use this if it is unlikely to be constrained by drawing performance; for example, if it renders only one frame per second, draws only relatively simple geometry with simple shaders, or uses a small HTML canvas element. Developers are encouraged to use this value if their content allows, since it may significantly improve battery life on portable devices.
"high-performance"
-
Indicates a request to prioritize performance over power consumption.
Note: By choosing this value, developers should be aware that, for devices created on the resulting adapter, user agents are more likely to force device loss, in order to save power by switching to a lower-power adapter. Developers are encouraged to only specify this value if they believe it is absolutely necessary, since it may significantly decrease battery life on portable devices.
6.4. GPUAdapter
A GPUAdapter encapsulates an adapter, and describes its capabilities (extensions and limits).
To get a GPUAdapter, use requestAdapter().

interface GPUAdapter {
    readonly attribute DOMString name;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    //readonly attribute GPULimits limits; Don’t expose higher limits for now.
    Promise<GPUDevice> requestDevice(optional GPUDeviceDescriptor descriptor = {});
};
GPUAdapter has:
- These attributes:
  name, of type DOMString, readonly
      A human-readable name identifying the adapter. The contents are implementation-defined.
  extensions, of type FrozenArray<GPUExtensionName>, readonly
      Accessor for this.[[adapter]].[[extensions]].
- These internal slots:
  [[adapter]], of type adapter, readonly
      An internal slot holding the adapter to which this GPUAdapter refers.
- The methods defined by the following sub-sections.
6.4.1. requestDevice(optional descriptor)
this: of type GPUAdapter.
Arguments:
- optional GPUDeviceDescriptor descriptor = {}
Returns: promise, of type Promise<GPUDevice>.
Requests a device from the adapter.
Returns a new promise, promise. On the Device timeline, the following steps occur:
- If the user agent can fulfill the request and the Valid Usage rules are met:
  - promise resolves to a new GPUDevice object encapsulating a new device with the capabilities described by descriptor.
- Otherwise, promise rejects with an OperationError.
Let adapter be this.[[adapter]].
- The set of GPUExtensionName values in descriptor.extensions must be a subset of those in adapter.[[extensions]].
- For each type of limit in GPULimits, the value of that limit in descriptor.limits must be no better than the value of that limit in adapter.[[limits]].
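A non-normative sketch of a device request that opts into an extension and explicit limits, both of which must be supported by the adapter (the specific extension and limit values are illustrative only):

const device = await adapter.requestDevice({
  extensions: ["texture-compression-bc"],  // must be a subset of adapter.extensions
  limits: { maxBindGroups: 4 },            // must be no better than adapter.[[limits]]
});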
6.4.1.1. GPUDeviceDescriptor
GPUDeviceDescriptor describes a device request.

dictionary GPUDeviceDescriptor : GPUObjectDescriptorBase {
    sequence<GPUExtensionName> extensions = [];
    GPULimits limits = {};
};

extensions, of type sequence<GPUExtensionName>, defaulting to []
    The set of GPUExtensionName values in this sequence defines the exact set of extensions that must be enabled on the device.
limits, of type GPULimits, defaulting to {}
    Defines the exact limits that must be enabled on the device.
6.4.1.2. GPUExtensionName
Each GPUExtensionName identifies a set of functionality which, if available, allows additional usages of WebGPU that would have otherwise been invalid.

enum GPUExtensionName {
    "texture-compression-bc",
    "pipeline-statistics-query"
};
6.4.1.3. GPULimits
GPULimits describes various limits in the usage of WebGPU on a device.
One limit value may be better than another. For each limit, "better" is defined.
Note: Setting "better" limits may not necessarily be desirable. While they enable strictly more programs to be valid, they may have a performance impact. Because of this, and to improve portability across devices and implementations, applications should generally request the "worst" limits that work for their content.
dictionary GPULimits {
    GPUSize32 maxBindGroups = 4;
    GPUSize32 maxDynamicUniformBuffersPerPipelineLayout = 8;
    GPUSize32 maxDynamicStorageBuffersPerPipelineLayout = 4;
    GPUSize32 maxSampledTexturesPerShaderStage = 16;
    GPUSize32 maxSamplersPerShaderStage = 16;
    GPUSize32 maxStorageBuffersPerShaderStage = 4;
    GPUSize32 maxStorageTexturesPerShaderStage = 4;
    GPUSize32 maxUniformBuffersPerShaderStage = 12;
    GPUSize32 maxTextureSize = 8192;
    GPUSize32 maxTextureLayers = 256;
};
maxBindGroups, of type GPUSize32, defaulting to 4
    The maximum number of GPUBindGroupLayouts allowed in bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxDynamicUniformBuffersPerPipelineLayout, of type GPUSize32, defaulting to 8
    The maximum number of entries for which type is "uniform-buffer" and hasDynamicOffset is true, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxDynamicStorageBuffersPerPipelineLayout, of type GPUSize32, defaulting to 4
    The maximum number of entries for which type is "storage-buffer" and hasDynamicOffset is true, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxSampledTexturesPerShaderStage, of type GPUSize32, defaulting to 16
    For each possible GPUShaderStage stage, the maximum number of entries for which type is "sampled-texture" and visibility includes stage, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxSamplersPerShaderStage, of type GPUSize32, defaulting to 16
    For each possible GPUShaderStage stage, the maximum number of entries for which type is "sampler" or "comparison-sampler" and visibility includes stage, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxStorageBuffersPerShaderStage, of type GPUSize32, defaulting to 4
    For each possible GPUShaderStage stage, the maximum number of entries for which type is "storage-buffer" and visibility includes stage, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxStorageTexturesPerShaderStage, of type GPUSize32, defaulting to 4
    For each possible GPUShaderStage stage, the maximum number of entries for which type is "readonly-storage-texture" or "writeonly-storage-texture" and visibility includes stage, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxUniformBuffersPerShaderStage, of type GPUSize32, defaulting to 12
    For each possible GPUShaderStage stage, the maximum number of entries for which type is "uniform-buffer" and visibility includes stage, across all bindGroupLayouts when creating a GPUPipelineLayout.
    Higher is better.
maxTextureSize, of type GPUSize32, defaulting to 8192
    The maximum size in a single dimension of a texture.
    Note: This is only used in the width dimension for 1D textures, and the width & height dimensions for 2D textures. This isn’t used at all for 3D textures.
    Higher is better.
maxTextureLayers, of type GPUSize32, defaulting to 256
    The maximum number of layers in an array texture.
    Note: This is also used for the size limit of 3D textures.
    Higher is better.
6.5. GPUDevice
A GPUDevice encapsulates a device and exposes the functionality of that device.
GPUDevice is the top-level interface through which WebGPU interfaces are created.
To get a GPUDevice, use requestDevice().
[Exposed=(Window, DedicatedWorker), Serializable]
interface GPUDevice : EventTarget {
    [SameObject] readonly attribute GPUAdapter adapter;
    readonly attribute FrozenArray<GPUExtensionName> extensions;
    readonly attribute object limits;

    [SameObject] readonly attribute GPUQueue defaultQueue;

    GPUBuffer createBuffer(GPUBufferDescriptor descriptor);
    GPUMappedBuffer createBufferMapped(GPUBufferDescriptor descriptor);
    GPUTexture createTexture(GPUTextureDescriptor descriptor);
    GPUSampler createSampler(optional GPUSamplerDescriptor descriptor = {});

    GPUBindGroupLayout createBindGroupLayout(GPUBindGroupLayoutDescriptor descriptor);
    GPUPipelineLayout createPipelineLayout(GPUPipelineLayoutDescriptor descriptor);
    GPUBindGroup createBindGroup(GPUBindGroupDescriptor descriptor);

    GPUShaderModule createShaderModule(GPUShaderModuleDescriptor descriptor);
    GPUComputePipeline createComputePipeline(GPUComputePipelineDescriptor descriptor);
    GPURenderPipeline createRenderPipeline(GPURenderPipelineDescriptor descriptor);

    GPUCommandEncoder createCommandEncoder(optional GPUCommandEncoderDescriptor descriptor = {});
    GPURenderBundleEncoder createRenderBundleEncoder(GPURenderBundleEncoderDescriptor descriptor);

    GPUQuerySet createQuerySet(GPUQuerySetDescriptor descriptor);
};
GPUDevice includes GPUObjectBase;
GPUDevice has:
- These attributes:
  adapter, of type GPUAdapter, readonly
      The GPUAdapter from which this device was created.
  extensions, of type FrozenArray<GPUExtensionName>, readonly
      A sequence containing the GPUExtensionNames of the extensions supported by the device (i.e. the ones with which it was created).
  limits, of type object, readonly
      A GPULimits object exposing the limits supported by the device (i.e. the ones with which it was created).
- These internal slots:
- The methods listed in its WebIDL definition above, which are defined elsewhere in this document.
GPUDevice objects are serializable objects.
- If forStorage is true, throw a "DataCloneError".
- Set serialized.device to the value of value.[[device]].
- Set value.[[device]] to serialized.device.
7. Buffers
7.1. GPUBuffer
define buffer (internal object)
A GPUBuffer represents a block of memory that can be used in GPU operations. Data is stored in linear layout, meaning that each byte of the allocation can be addressed by its offset from the start of the GPUBuffer, subject to alignment restrictions depending on the operation. Some GPUBuffers can be mapped, which makes the block of memory accessible via an ArrayBuffer called its mapping.
GPUBuffers are created via GPUDevice.createBuffer(descriptor), which returns a new buffer in the mapped or unmapped state.
[Serializable]
interface GPUBuffer {
    Promise<void> mapAsync(optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    ArrayBuffer getMappedRange(optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    void unmap();
    void destroy();
};
GPUBuffer includes GPUObjectBase;
GPUBuffer has the following internal slots:
[[size]], of type GPUSize64
    The length of the GPUBuffer allocation in bytes.
[[usage]], of type GPUBufferUsageFlags
    The allowed usages for this GPUBuffer.
[[state]], of type buffer state
    The current state of the GPUBuffer.
[[mapping]], of type ArrayBuffer or Promise or null
    The mapping for this GPUBuffer. The ArrayBuffer isn’t directly accessible and is instead accessed through views into it, called the mapped ranges, that are stored in [[mapped_ranges]].
    Specify [[mapping]] in terms of DataBlock similarly to AllocateArrayBuffer? <https://github.com/gpuweb/gpuweb/issues/605>
[[mapping_range]], of type sequence<Number> or null
    The range of this GPUBuffer that is mapped.
[[mapped_ranges]], of type sequence<ArrayBuffer> or null
    The ArrayBuffers returned via getMappedRange to the application. They are tracked so they can be detached when unmap is called.
[[usage]] is differently named from [[textureUsage]]. We should make it consistent.
Each GPUBuffer has a current buffer state on the Content timeline which is one of the following:
- "mapped" where the GPUBuffer is available for CPU operations on its content.
- "mapped at creation" where the GPUBuffer was just created and is available for CPU operations on its content.
- "mapping pending" where the GPUBuffer is being made available for CPU operations on its content.
- "unmapped" where the GPUBuffer is available for GPU operations.
- "destroyed" where the GPUBuffer is no longer available for any operations except destroy.
Note: [[size]] and [[usage]] are immutable once the GPUBuffer has been created.
GPUBuffer has a state machine with the following states. ([[mapping]], [[mapping_range]], and [[mapped_ranges]] are null when not specified.)
- mapped or mapped at creation with an ArrayBuffer-typed [[mapping]], a sequence of two numbers in [[mapping_range]] and a sequence of ArrayBuffers in [[mapped_ranges]]
- mapping pending with a Promise-typed [[mapping]].
GPUBuffer is Serializable. It is a reference to an internal buffer object, and Serializable means that the reference can be copied between realms (threads/workers), allowing multiple realms to access it concurrently. Since GPUBuffer has internal state (mapped, destroyed), that state is internally-synchronized: these state changes occur atomically across realms.
7.2. Buffer Creation
7.2.1. GPUBufferDescriptor
This specifies the options to use in creating a GPUBuffer.

dictionary GPUBufferDescriptor : GPUObjectDescriptorBase {
    required GPUSize64 size;
    required GPUBufferUsageFlags usage;
    boolean mappedAtCreation = false;
};
7.3. Buffer Usage
typedef [EnforceRange] unsigned long GPUBufferUsageFlags;
interface GPUBufferUsage {
    const GPUBufferUsageFlags MAP_READ      = 0x0001;
    const GPUBufferUsageFlags MAP_WRITE     = 0x0002;
    const GPUBufferUsageFlags COPY_SRC      = 0x0004;
    const GPUBufferUsageFlags COPY_DST      = 0x0008;
    const GPUBufferUsageFlags INDEX         = 0x0010;
    const GPUBufferUsageFlags VERTEX        = 0x0020;
    const GPUBufferUsageFlags UNIFORM       = 0x0040;
    const GPUBufferUsageFlags STORAGE       = 0x0080;
    const GPUBufferUsageFlags INDIRECT      = 0x0100;
    const GPUBufferUsageFlags QUERY_RESOLVE = 0x0200;
};
7.3.1. createBuffer(descriptor)
Arguments:
- GPUBufferDescriptor descriptor
Returns: GPUBuffer
- If this call doesn’t follow the createBuffer Valid Usage:
  - Return an error buffer.
    Explain that the resulting error buffer can still be mapped at creation. <https://github.com/gpuweb/gpuweb/issues/605>
- Let b be a new GPUBuffer object.
- If descriptor.mappedAtCreation is true:
  - Set b.[[mapping]] to a new ArrayBuffer of size b.[[size]].
  - Set b.[[mapping_range]] to [0, descriptor.size].
  - Set b.[[mapped_ranges]] to [].
  - Set b.[[state]] to mapped at creation.
- Else:
  - Set b.[[mapping]] to null.
  - Set b.[[mapping_range]] to null.
  - Set b.[[mapped_ranges]] to null.
- Set each byte of b’s allocation to zero.
- Return b.
Note: it is valid to set mappedAtCreation to true without MAP_READ or MAP_WRITE in usage. This can be used to set the buffer’s initial data.
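A non-normative sketch of using mappedAtCreation to provide initial data (the size, usage, and data values are placeholders):

const buffer = device.createBuffer({
  size: 16,
  usage: GPUBufferUsage.VERTEX,   // note: no MAP_WRITE required
  mappedAtCreation: true,
});
new Float32Array(buffer.getMappedRange(0, 16)).set([1, 2, 3, 4]);
buffer.unmap();                    // the buffer is now usable by the GPU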
Given a GPUDevice this and a GPUBufferDescriptor descriptor, the following validation rules apply:
- descriptor.usage must be a subset of this.[[allowed buffer usages]].
- If descriptor.usage contains MAP_READ then the only other usage it may contain is COPY_DST.
- If descriptor.usage contains MAP_WRITE then the only other usage it may contain is COPY_SRC.
Explain what are a GPUDevice's [[allowed buffer usages]]. <https://github.com/gpuweb/gpuweb/issues/605>
7.4. Buffer Destruction
An application that no longer requires a GPUBuffer can choose to lose access to it before garbage collection by calling destroy().
Note: This allows the user agent to reclaim the GPU memory associated with the GPUBuffer once all previously submitted operations using it are complete.
7.4.1. destroy()
this: of type GPUBuffer.
- If this.[[state]] is mapped or mapped at creation:
  - Run the steps to unmap this.
Handle error buffers once we have a description of the error monad.
7.5. Buffer Mapping
An application can request to map a GPUBuffer so that it can access its content via ArrayBuffers that represent part of the GPUBuffer's allocations. Mapping a GPUBuffer is requested asynchronously with mapAsync so that the user agent can ensure the GPU finished using the GPUBuffer before the application can access its content. Once the GPUBuffer is mapped, the application can synchronously ask for access to ranges of its content with getMappedRange. A mapped GPUBuffer cannot be used by the GPU and must be unmapped using unmap before work using it can be submitted to the Queue timeline.
Add client-side validation that a mapped buffer can only be unmapped and destroyed on the worker on which it was mapped. Likewise getMappedRange can only be called on that worker. <https://github.com/gpuweb/gpuweb/issues/605>
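A non-normative sketch of reading back the contents of a mappable buffer (readbackBuffer is assumed to have MAP_READ usage and to have been written by previously submitted GPU work):

await readbackBuffer.mapAsync(0, 256);   // resolves once the GPU is done with the buffer
const data = new Uint8Array(readbackBuffer.getMappedRange(0, 256));
// ... read data on the Content timeline ...
readbackBuffer.unmap();                  // required before the GPU can use the buffer again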
7.5.1. mapAsync(offset, size)
There is concern that it should be clearer at a mapAsync call point if it is meant for reading or writing because the semantics are very different. Alternatives suggested include splitting into mapReadAsync vs. mapWriteAsync, or adding a GPUMapFlags argument to the call that can later be used to extend the method. <https://github.com/gpuweb/gpuweb/issues/605>
this: of type GPUBuffer.
Arguments:
Returns: Promise
Handle error buffers once we have a description of the error monad. <https://github.com/gpuweb/gpuweb/issues/605>
- If size is 0 and offset is less than this.[[size]]:
  - Set size to this.[[size]] - offset.
- If this call doesn’t follow mapAsync Valid Usage:
  - Record a validation error on the current scope.
  - Return a promise rejected with an AbortError on the Device timeline.
- Let p be a new Promise.
- Set this.[[mapping]] to p.
- Set this.[[state]] to mapping pending.
- Enqueue an operation on the default queue’s Queue timeline that will execute the following:
  - If this.[[state]] is mapping pending:
    - Let m be a new ArrayBuffer of size size.
    - Set the content of m to the content of this’s allocation starting at offset offset and for size bytes.
    - Set this.[[mapping]] to m.
    - Set this.[[mapping_range]] to [offset, size].
    - Set this.[[mapped_ranges]] to [].
    - Resolve p.
- Return p.
7.5.2. getMappedRange(offset, size)
this: of type GPUBuffer.
Arguments:
Returns: ArrayBuffer
- If this call doesn’t follow the getMappedRange Valid Usage:
  - Throw an OperationError.
- Let m be a new ArrayBuffer of size size pointing at the content of this.[[mapping]] at offset offset - this.[[mapping_range]][0].
- Append m to this.[[mapped_ranges]].
- Return m.
Given a GPUBuffer this, a GPUSize64 offset and a GPUSize64 size, the following validation rules apply:
- this.[[state]] must be mapped or mapped at creation.
- offset must be a multiple of 8.
- size must be a multiple of 4.
- offset must be greater than or equal to this.[[mapping_range]][0].
- offset + size must be less than or equal to this.[[mapping_range]][0] + this.[[mapping_range]][1].
- [offset, offset + size) must not overlap another range in this.[[mapped_ranges]].
Note: It is valid to get mapped ranges of an error GPUBuffer that is mapped at creation because the Content timeline might not know it is an error GPUBuffer.
7.5.3. unmap()
this: of type GPUBuffer.
- If this call doesn’t follow unmap Valid Usage:
  - Record a validation error on the current scope.
  - Return.
- If this.[[state]] is mapping pending:
  - Reject [[mapping]] with an OperationError.
  - Set this.[[mapping]] to null.
- If this.[[state]] is mapped or mapped at creation:
  - If one of the two following conditions holds:
    Then:
    - Enqueue an operation on the default queue’s Queue timeline that updates the this.[[mapping_range]] of this’s allocation to the content of this.[[mapping]].
  - Detach each ArrayBuffer in this.[[mapped_ranges]] from its content.
  - Set this.[[mapping]] to null.
  - Set this.[[mapping_range]] to null.
  - Set this.[[mapped_ranges]] to null.
Note: When a MAP_READ buffer (not currently mapped at creation) is unmapped, any local modifications done by the application to the mapped ranges ArrayBuffers are discarded and will not affect the content of follow-up mappings.
Given a GPUBuffer, the following validation rules apply:
Note: It is valid to unmap an error GPUBuffer that is mapped at creation because the Content timeline might not know it is an error GPUBuffer.
8. Textures and Texture Views
define texture (internal object)
define mipmap level, array layer, slice (concepts)
8.1. GPUTexture
GPUTextures are created via GPUDevice.createTexture(descriptor), which returns a new texture.

[Serializable]
interface GPUTexture {
    GPUTextureView createView(optional GPUTextureViewDescriptor descriptor = {});
    void destroy();
};
GPUTexture includes GPUObjectBase;
GPUTexture has the following internal slots:
[[textureSize]], of type GPUExtent3D
    The size of the GPUTexture in texels in mipmap level 0.
[[mipLevelCount]], of type GPUIntegerCoordinate
    The total number of the mipmap levels of the GPUTexture.
[[sampleCount]], of type GPUSize32
    The number of samples in each texel of the GPUTexture.
[[dimension]], of type GPUTextureDimension
    The dimension of the GPUTexture.
[[format]], of type GPUTextureFormat
    The format of the GPUTexture.
[[textureUsage]], of type GPUTextureUsageFlags
    The allowed usages for this GPUTexture.
8.1.1. Texture Creation
dictionary GPUTextureDescriptor : GPUObjectDescriptorBase {
    required GPUExtent3D size;
    GPUIntegerCoordinate mipLevelCount = 1;
    GPUSize32 sampleCount = 1;
    GPUTextureDimension dimension = "2d";
    required GPUTextureFormat format;
    required GPUTextureUsageFlags usage;
};

enum GPUTextureDimension {
    "1d",
    "2d",
    "3d"
};

typedef [EnforceRange] unsigned long GPUTextureUsageFlags;
interface GPUTextureUsage {
    const GPUTextureUsageFlags COPY_SRC          = 0x01;
    const GPUTextureUsageFlags COPY_DST          = 0x02;
    const GPUTextureUsageFlags SAMPLED           = 0x04;
    const GPUTextureUsageFlags STORAGE           = 0x08;
    const GPUTextureUsageFlags OUTPUT_ATTACHMENT = 0x10;
};
8.1.2. createTexture(descriptor)
Arguments:
- GPUTextureDescriptor descriptor
Returns: GPUTexture
- If device is lost, or if this call doesn’t follow the createTexture Valid Usage, return an error texture.
- Let t be a new GPUTexture object.
- Set t.[[textureSize]] to descriptor.size.
- Set t.[[sampleCount]] to descriptor.sampleCount.
- Set t.[[dimension]] to descriptor.dimension.
- Set t.[[format]] to descriptor.format.
- Set t.[[textureUsage]] to descriptor.usage.
- Return t.
The maximum mipLevel count of a dimension and a size is computed as follows:
- Calculate the values of w, h, and d:
  - If the dimension is "1d":
    - Let w = size.width.
    - Let h = 1.
    - Let d = 1.
  - Else if the dimension is "2d":
    - Let w = size.width.
    - Let h = size.height.
    - Let d = 1.
  - Else (the dimension is "3d"):
    - Let w = size.width.
    - Let h = size.height.
    - Let d = size.depth.
- Let m = the maximum value of w, h, and d.
- Return one plus the greatest integral value of x for which 2^x <= m.
Given a GPUDevice this and a GPUTextureDescriptor descriptor, the following validation rules apply:
- descriptor.mipLevelCount must be nonzero.
- descriptor.sampleCount must be nonzero.
- If descriptor.dimension is "1d":
  - descriptor.size.width must be less than or equal to the owning GPUDevice's maxTextureSize.
  - descriptor.size.height must be less than or equal to the owning GPUDevice's maxTextureLayers.
  - descriptor.sampleCount must be 1.
- Else if descriptor.dimension is "2d":
  - descriptor.size.width must be less than or equal to the owning GPUDevice's maxTextureSize.
  - descriptor.size.height must be less than or equal to the owning GPUDevice's maxTextureSize.
  - descriptor.size.depth must be less than or equal to the owning GPUDevice's maxTextureLayers.
- Else (descriptor.dimension is "3d"):
  - descriptor.size.width must be less than or equal to the owning GPUDevice's maxTextureLayers.
  - descriptor.size.height must be less than or equal to the owning GPUDevice's maxTextureLayers.
  - descriptor.size.depth must be less than or equal to the owning GPUDevice's maxTextureLayers.
  - descriptor.sampleCount must be 1.
- If descriptor.sampleCount > 1:
  - descriptor.mipLevelCount must be 1.
  - descriptor.format must not be a compressed format.
- descriptor.mipLevelCount must be less than or equal to maximum mipLevel count(descriptor.dimension, descriptor.size).
- If descriptor.format is a depth or stencil format:
  - descriptor.dimension must be "2d".
  - descriptor.sampleCount must be 1.
- descriptor.usage must be a combination of GPUTextureUsage values.
- descriptor.sampleCount must be either 1 or 4.
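A non-normative sketch of a texture creation call that satisfies these rules (the size, format, and usages are illustrative):

const texture = device.createTexture({
  size: { width: 256, height: 256, depth: 1 },
  mipLevelCount: 1,
  sampleCount: 1,
  dimension: "2d",
  format: "rgba8unorm",
  usage: GPUTextureUsage.SAMPLED | GPUTextureUsage.COPY_DST,
});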
8.2. GPUTextureView
interface GPUTextureView { };
GPUTextureView includes GPUObjectBase;
8.2.1. Texture View Creation
dictionary GPUTextureViewDescriptor : GPUObjectDescriptorBase {
    GPUTextureFormat format;
    GPUTextureViewDimension dimension;
    GPUTextureAspect aspect = "all";
    GPUIntegerCoordinate baseMipLevel = 0;
    GPUIntegerCoordinate mipLevelCount = 0;
    GPUIntegerCoordinate baseArrayLayer = 0;
    GPUIntegerCoordinate arrayLayerCount = 0;
};
Make this a standalone algorithm used in the createView algorithm.
The references to GPUTextureDescriptor here should actually refer to internal slots of a texture internal object once we have one.
- dimension: If unspecified:
- mipLevelCount: If 0, defaults to texture.mipLevelCount − baseMipLevel.
- arrayLayerCount: If 0, defaults to texture.size.depth − baseArrayLayer.
enum GPUTextureViewDimension {
    "1d",
    "2d",
    "2d-array",
    "cube",
    "cube-array",
    "3d"
};

enum GPUTextureAspect {
    "all",
    "stencil-only",
    "depth-only"
};
8.2.2. GPUTexture.createView(descriptor)
this: of type GPUTexture.
Arguments:
- optional GPUTextureViewDescriptor descriptor
Returns: view, of type GPUTextureView.
8.3. Texture Formats
The name of the format specifies the order of components, bits per component, and data type for the component.
- r, g, b, a = red, green, blue, alpha
- unorm = unsigned normalized
- snorm = signed normalized
- uint = unsigned int
- sint = signed int
- float = floating point
If the format has the -srgb suffix, then sRGB conversions from gamma to linear and vice versa are applied during the reading and writing of color values in the shader. Compressed texture formats are provided by extensions. Their naming should follow the convention here, with the texture name as a prefix, e.g. etc2-rgba8unorm.
The texel block is a single addressable element of the textures in pixel-based GPUTextureFormats, and a single compressed block of the textures in block-based compressed GPUTextureFormats.
The texel block width and texel block height specify the dimensions of one texel block.
- For pixel-based GPUTextureFormats, the texel block width and texel block height are always 1.
- For block-based compressed GPUTextureFormats, the texel block width is the number of texels in each row of one texel block, and the texel block height is the number of texel rows in one texel block.
The texel block size of a GPUTextureFormat is the number of bytes to store one texel block. The texel block size of each GPUTextureFormat is constant except for "depth24plus" and "depth24plus-stencil8".
enum GPUTextureFormat {
    // 8-bit formats
    "r8unorm",
    "r8snorm",
    "r8uint",
    "r8sint",

    // 16-bit formats
    "r16uint",
    "r16sint",
    "r16float",
    "rg8unorm",
    "rg8snorm",
    "rg8uint",
    "rg8sint",

    // 32-bit formats
    "r32uint",
    "r32sint",
    "r32float",
    "rg16uint",
    "rg16sint",
    "rg16float",
    "rgba8unorm",
    "rgba8unorm-srgb",
    "rgba8snorm",
    "rgba8uint",
    "rgba8sint",
    "bgra8unorm",
    "bgra8unorm-srgb",

    // Packed 32-bit formats
    "rgb10a2unorm",
    "rg11b10float",

    // 64-bit formats
    "rg32uint",
    "rg32sint",
    "rg32float",
    "rgba16uint",
    "rgba16sint",
    "rgba16float",

    // 128-bit formats
    "rgba32uint",
    "rgba32sint",
    "rgba32float",

    // Depth and stencil formats
    "depth32float",
    "depth24plus",
    "depth24plus-stencil8"
};
The following texture formats are considered depth or stencil formats:
- "depth32float"
- "depth24plus"
- "depth24plus-stencil8"
There are no compressed formats defined in unextended WebGPU. Extensions may define some, though.
The depth24plus family of formats (depth24plus and depth24plus-stencil8) must have a depth-component precision of 1 ULP ≤ 1 / (2^24).
Note: This is unlike the 24-bit unsigned normalized format family typically found in native APIs, which has a precision of 1 ULP = 1 / (2^24 − 1).
enum GPUTextureComponentType {
    "float",
    "sint",
    "uint"
};
9. Samplers
9.1. GPUSampler
interface GPUSampler { };
GPUSampler includes GPUObjectBase;
GPUSampler has the following internal slots:
[[compareEnable]], of type boolean
    Whether the GPUSampler is used as a comparison sampler.
9.1.1. Creation
dictionary GPUSamplerDescriptor : GPUObjectDescriptorBase {
    GPUAddressMode addressModeU = "clamp-to-edge";
    GPUAddressMode addressModeV = "clamp-to-edge";
    GPUAddressMode addressModeW = "clamp-to-edge";
    GPUFilterMode magFilter = "nearest";
    GPUFilterMode minFilter = "nearest";
    GPUFilterMode mipmapFilter = "nearest";
    float lodMinClamp = 0;
    float lodMaxClamp = 0xffffffff; // TODO: What should this be? Was Number.MAX_VALUE.
    GPUCompareFunction compare;
};
9.1.2. GPUDevice.createSampler(descriptor)
Arguments:
- optional GPUSamplerDescriptor descriptor = {}
Returns: GPUSampler
- Let s be a new GPUSampler object.
- Set the [[compareEnable]] slot of s to false if the compare attribute of descriptor is null or undefined. Otherwise, set it to true.
- Return s.
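A non-normative sketch of creating a filtering sampler and a comparison sampler (the address modes and compare function shown are arbitrary):

const sampler = device.createSampler({
  addressModeU: "repeat",
  addressModeV: "repeat",
  magFilter: "linear",
  minFilter: "linear",
  mipmapFilter: "linear",
});                                  // [[compareEnable]] is false

const shadowSampler = device.createSampler({
  compare: "less-equal",
});                                  // [[compareEnable]] is true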
enum GPUAddressMode {
    "clamp-to-edge",
    "repeat",
    "mirror-repeat"
};

enum GPUFilterMode {
    "nearest",
    "linear"
};

enum GPUCompareFunction {
    "never",
    "less",
    "equal",
    "less-equal",
    "greater",
    "not-equal",
    "greater-equal",
    "always"
};
10. Resource Binding
10.1. GPUBindGroupLayout
A GPUBindGroupLayout defines the interface between a set of resources bound in a GPUBindGroup and their accessibility in shader stages.

[Serializable]
interface GPUBindGroupLayout { };
GPUBindGroupLayout includes GPUObjectBase;
10.1.1. Creation
A GPUBindGroupLayout is created via GPUDevice.createBindGroupLayout().

dictionary GPUBindGroupLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayoutEntry> entries;
};
A GPUBindGroupLayoutEntry describes a single shader resource binding to be included in a GPUBindGroupLayout.

dictionary GPUBindGroupLayoutEntry {
    required GPUIndex32 binding;
    required GPUShaderStageFlags visibility;
    required GPUBindingType type;
    GPUTextureViewDimension viewDimension = "2d";
    GPUTextureComponentType textureComponentType = "float";
    GPUTextureFormat storageTextureFormat;
    boolean multisampled = false;
    boolean hasDynamicOffset = false;
};
- binding: A unique identifier for a resource binding within a GPUBindGroupLayoutEntry, a corresponding GPUBindGroupEntry, and shader stages.
- visibility: A bitset of the members of GPUShaderStage. Each set bit indicates that a GPUBindGroupLayoutEntry's resource will be accessible from the associated shader stage.
typedef [EnforceRange] unsigned long GPUShaderStageFlags;
interface GPUShaderStage {
    const GPUShaderStageFlags VERTEX   = 0x1;
    const GPUShaderStageFlags FRAGMENT = 0x2;
    const GPUShaderStageFlags COMPUTE  = 0x4;
};
- type: A member of GPUBindingType that indicates the intended usage of a resource binding in its visible GPUShaderStages.
enum GPUBindingType {
    "uniform-buffer",
    "storage-buffer",
    "readonly-storage-buffer",
    "sampler",
    "comparison-sampler",
    "sampled-texture",
    "readonly-storage-texture",
    "writeonly-storage-texture"
    // TODO: other binding types
};
- viewDimension, multisampled: Describes the dimensionality of texture view bindings, and indicates if they are multisampled.
  Note: This allows Metal-based implementations to back the respective bind groups with MTLArgumentBuffer objects that are more efficient to bind at run-time.
- hasDynamicOffset: For uniform-buffer, storage-buffer, and readonly-storage-buffer bindings, indicates that the binding has a dynamic offset. One offset must be passed to setBindGroup for each dynamic binding in increasing order of binding number.
A GPUBindGroupLayout object has the following internal slots:
[[entryMap]], of type map
    The map of binding indices pointing to the GPUBindGroupLayoutEntrys, which this GPUBindGroupLayout describes.
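A non-normative sketch of a bind group layout with a uniform buffer visible to the vertex stage and a sampled texture plus sampler visible to the fragment stage (the binding numbers are arbitrary):

const bindGroupLayout = device.createBindGroupLayout({
  entries: [
    { binding: 0, visibility: GPUShaderStage.VERTEX,   type: "uniform-buffer" },
    { binding: 1, visibility: GPUShaderStage.FRAGMENT, type: "sampled-texture" },
    { binding: 2, visibility: GPUShaderStage.FRAGMENT, type: "sampler" },
  ],
});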
10.1.2. GPUDevice.createBindGroupLayout(GPUBindGroupLayoutDescriptor)
this: of type GPUDevice.
Arguments:
- GPUBindGroupLayoutDescriptor descriptor
Returns: GPUBindGroupLayout.
The createBindGroupLayout(descriptor) method is used to create GPUBindGroupLayouts.
- Ensure bind group layout device validation is not violated.
- Let layout be a new valid GPUBindGroupLayout object.
- For each GPUBindGroupLayoutEntry bindingDescriptor in descriptor.entries:
  - Ensure bindingDescriptor.binding does not violate binding validation.
  - If bindingDescriptor.visibility includes VERTEX, ensure vertex shader binding validation is not violated.
  - If bindingDescriptor.type is uniform-buffer:
    - Ensure uniform buffer validation is not violated.
    - If bindingDescriptor.hasDynamicOffset is true, ensure dynamic uniform buffer validation is not violated.
  - If bindingDescriptor.type is storage-buffer or readonly-storage-buffer:
    - Ensure storage buffer validation is not violated.
    - If bindingDescriptor.hasDynamicOffset is true, ensure dynamic storage buffer validation is not violated.
  - If bindingDescriptor.type is sampled-texture, ensure sampled texture validation is not violated.
  - If bindingDescriptor.type is readonly-storage-texture or writeonly-storage-texture, ensure storage texture validation is not violated.
  - If bindingDescriptor.type is sampler, ensure sampler validation is not violated.
  - Insert bindingDescriptor into layout.[[entryMap]] with the key of bindingDescriptor.binding.
- Return layout.
If any of the following conditions are violated:
- Generate a GPUValidationError in the current scope with an appropriate error message.
- Create a new invalid GPUBindGroupLayout and return the result.
bind group layout device validation: The GPUDevice
must not be lost.
binding validation: Each bindingDescriptor.binding
in descriptor must be unique.
vertex shader binding validation: storage-buffer
is not allowed.
uniform buffer validation: There must be GPULimits.maxUniformBuffersPerShaderStage
or
fewer bindingDescriptors of type uniform-buffer
visible on each shader stage in descriptor.
dynamic uniform buffer validation: There must be GPULimits.maxDynamicUniformBuffersPerPipelineLayout
or
fewer bindingDescriptors of type uniform-buffer
with hasDynamicOffset
set to true
in descriptor that are visible to any shader stage.
storage buffer validation: There must be GPULimits.maxStorageBuffersPerShaderStage
or
fewer bindingDescriptors of type storage-buffer
visible on each shader stage in descriptor.
dynamic storage buffer validation: There must be GPULimits.maxDynamicStorageBuffersPerPipelineLayout
or
fewer bindingDescriptors of type storage-buffer
with hasDynamicOffset
set to true
in descriptor that are visible to any shader stage.
sampled texture validation: There must be GPULimits.maxSampledTexturesPerShaderStage
or
fewer bindingDescriptors of type sampled-texture
visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset
must be false
.
storage texture validation: There must be GPULimits.maxStorageTexturesPerShaderStage
or
fewer bindingDescriptors of type readonly-storage-texture
and writeonly-storage-texture
visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset
must be false
.
sampler validation: There must be GPULimits.maxSamplersPerShaderStage
or
fewer bindingDescriptors of type sampler
visible on each shader stage in descriptor. bindingDescriptor.hasDynamicOffset
must be false
.
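Note: The following non-normative sketch shows how a GPUBindGroupLayout satisfying the rules above might be created. The variable device is assumed to be a valid GPUDevice; all other names are illustrative.

// A dynamic uniform buffer visible to both render stages, plus a
// sampled 2D texture visible only to the fragment stage.
const bindGroupLayout = device.createBindGroupLayout({
    entries: [
        {
            binding: 0,
            visibility: GPUShaderStage.VERTEX | GPUShaderStage.FRAGMENT,
            type: "uniform-buffer",
            hasDynamicOffset: true,
        },
        {
            binding: 1,
            visibility: GPUShaderStage.FRAGMENT,
            type: "sampled-texture",
            viewDimension: "2d",
            textureComponentType: "float",
        },
    ],
});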
10.1.3. Compatibility
GPUBindGroupLayout
objects a and b are considered group-equivalent if and only if, for any binding number binding, one of the following is true:
-
it’s missing from both a.
[[entryMap]]
and b.[[entryMap]]
. -
a.
[[entryMap]]
[binding] is entry-equivalent to b.[[entryMap]]
[binding]
GPUBindGroupLayoutEntry
entries a and b are considered entry-equivalent if all of the conditions are true:
-
a.
visibility
== b.visibility
-
if a.
type
is"uniform-buffer"
,"storage-buffer"
, or"readonly-storage-buffer"
, then:-
a.
hasDynamicOffset
== b.hasDynamicOffset
-
-
if a.
type
is"sampled-texture"
, then:-
a.
viewDimension
== b.viewDimension
-
a.
multisampled
== b.multisampled
-
-
if a.
type
is"readonly-storage-texture"
or"writeonly-storage-texture"
, then:-
a.
viewDimension
== b.viewDimension
-
If bind group layouts are group-equivalent, they can be used interchangeably in all contexts.
10.2. GPUBindGroup
A GPUBindGroup
defines a set of resources to be bound together in a group
and how the resources are used in shader stages.
interface GPUBindGroup {
};
GPUBindGroup includes GPUObjectBase;
10.2.1. Bind Group Creation
A GPUBindGroup
is created via GPUDevice.createBindGroup()
.
dictionary GPUBindGroupDescriptor : GPUObjectDescriptorBase {
    required GPUBindGroupLayout layout;
    required sequence<GPUBindGroupEntry> entries;
};
A GPUBindGroupEntry
describes a single resource to be bound in a GPUBindGroup
.
typedef (GPUSampler or GPUTextureView or GPUBufferBinding) GPUBindingResource;

dictionary GPUBindGroupEntry {
    required GPUIndex32 binding;
    required GPUBindingResource resource;
};

dictionary GPUBufferBinding {
    required GPUBuffer buffer;
    GPUSize64 offset = 0;
    GPUSize64 size;
};
A GPUBindGroup
object has the following internal slots:
[[layout]]
of typeGPUBindGroupLayout
.-
The
GPUBindGroupLayout
associated with thisGPUBindGroup
. [[entries]]
of type sequence<GPUBindGroupEntry
>.-
The set of
GPUBindGroupEntry
s thisGPUBindGroup
describes. [[usedBuffers]]
of type maplike<GPUBuffer
,GPUBufferUsage
>.-
The set of buffers used by this bind group and the corresponding usage flags.
[[usedTextures]]
of type maplike<GPUTexture
subresource,GPUTextureUsage
>.-
The set of texture subresources used by this bind group. Each subresource is stored with the union of usage flags that apply to it.
10.2.2. GPUDevice.createBindGroup(GPUBindGroupDescriptor)
-
GPUBindGroupDescriptor
descriptor
Returns: GPUBindGroup
.
The createBindGroup(descriptor)
method is used to create GPUBindGroup
s.
If any of the conditions below are violated:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPUBindGroup
and return the result.
-
Ensure bind group device validation is not violated.
-
Ensure descriptor.
layout
is a validGPUBindGroupLayout
. -
Ensure the number of
entries
of descriptor.layout
exactly equals to the number of descriptor.entries
. -
For each
GPUBindGroupEntry
bindingDescriptor in descriptor.entries
:-
Ensure there is exactly one
GPUBindGroupLayoutEntry
layoutBinding inentries
of descriptor.layout
such that layoutBinding.binding
equals to bindingDescriptor.binding
. -
If layoutBinding.
type
is"sampler"
:-
Ensure bindingDescriptor.
resource
is a validGPUSampler
object and[[compareEnable]]
is false.
-
-
If layoutBinding.
type
is"comparison-sampler"
:-
Ensure bindingDescriptor.
resource
is a validGPUSampler
object and[[compareEnable]]
is true.
-
-
If layoutBinding.
type
is"sampled-texture"
or"readonly-storage-texture"
or"writeonly-storage-texture"
.-
Ensure bindingDescriptor.
resource
is a validGPUTextureView
object. -
Ensure texture view binding validation is not violated.
-
Ensure bindingDescriptor.
storageTextureFormat
is a validGPUTextureFormat
.
-
-
If layoutBinding.
type
is"uniform-buffer"
or"storage-buffer"
or"readonly-storage-buffer"
.-
Ensure bindingDescriptor.
resource
is a validGPUBufferBinding
object. -
Ensure buffer binding validation is not violated.
-
-
-
Return a new
GPUBindGroup
object with:-
[[layout]]
= descriptor.layout
-
[[entries]]
= descriptor.entries
-
[[usedBuffers]]
= union of the buffer usages across all entries -
[[usedTextures]]
= union of the texture subresource usages across all entries
-
bind group device validation: The GPUDevice
must not be lost.
texture view binding validation: Let view be bindingDescriptor.resource
, a GPUTextureView
.
This layoutBinding must be compatible with this view. This requires:
-
Its layoutBinding.
viewDimension
must equal view’sdimension
. -
Its layoutBinding.
textureComponentType
must be compatible with view’sformat
. -
If layoutBinding.
multisampled
istrue
, view’s texture’ssampleCount
must be greater than 1. Otherwise, if bindingDescriptor.multisampled
isfalse
, view’s texture’ssampleCount
must be 1. -
If layoutBinding.
type
is"sampled-texture"
, view’s texture’susage
must includeSAMPLED
. Each texture subresource seen by view is added to[[usedTextures]]
withSAMPLED
flag. -
If layoutBinding.
type
is"readonly-storage-texture"
or"writeonly-storage-texture"
, view’s texture’susage
must includeSTORAGE
. Each texture subresource seen by view is added to[[usedTextures]]
withSTORAGE
flag.
buffer binding validation: Let bufferBinding be bindingDescriptor.resource
, a GPUBufferBinding
.
This layoutBinding must be compatible with this bufferBinding. This requires:
-
If layoutBinding.
type
is"uniform-buffer"
, the bufferBinding.buffer
'susage
must includeUNIFORM
. The buffer is added to the[[usedBuffers]]
map withUNIFORM
flag. -
If layoutBinding.
type
is"storage-buffer"
or"readonly-storage-buffer"
, the bufferBinding.buffer
'susage
must includeSTORAGE
. The buffer is added to the[[usedBuffers]]
map withSTORAGE
flag. -
The bound part designated by bufferBinding.
offset
and bufferBinding.size
must reside inside the buffer.
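Note: A non-normative sketch of creating a GPUBindGroup against the layout from the earlier example. uniformBuffer and textureView are assumed to be a valid GPUBuffer (with UNIFORM usage) and a valid GPUTextureView (of a texture with SAMPLED usage).

const bindGroup = device.createBindGroup({
    layout: bindGroupLayout,
    entries: [
        { binding: 0, resource: { buffer: uniformBuffer, offset: 0, size: 256 } },
        { binding: 1, resource: textureView },
    ],
});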
10.3. GPUPipelineLayout
A GPUPipelineLayout
defines the mapping between resources of all GPUBindGroup
objects set up during command encoding in setBindGroup
, and the shaders of the pipeline set by GPURenderEncoderBase.setPipeline
or GPUComputePassEncoder.setPipeline
.
The full binding address of a resource can be defined as a trio of:
-
shader stage mask, to which the resource is visible
-
bind group index
-
binding number
The components of this address can also be seen as the binding space of a pipeline. A GPUBindGroup
(with the corresponding GPUBindGroupLayout
) covers that space for a fixed bind group index. The contained bindings need to be a superset of the resources used by the shader at this bind group index.
[Serializable]
interface GPUPipelineLayout {
};
GPUPipelineLayout includes GPUObjectBase;
GPUPipelineLayout
has the following internal slots:
[[bindGroupLayouts]]
of type sequence<GPUBindGroupLayout
>.-
The
GPUBindGroupLayout
objects provided at creation inGPUPipelineLayoutDescriptor.bindGroupLayouts
.
Note: using the same GPUPipelineLayout
for many GPURenderPipeline
or GPUComputePipeline
pipelines guarantees that the user agent doesn’t need to rebind any resources internally when there is a switch between these pipelines.
Consider a scenario where GPUComputePipeline object X was created with GPUPipelineLayout.bindGroupLayouts A, B, C, and GPUComputePipeline object Y was created with GPUPipelineLayout.bindGroupLayouts A, D, C. Suppose the command encoding sequence sets a pipeline and dispatches, then switches to the other pipeline and dispatches again. In this scenario, the user agent would have to re-bind the group at slot 2 for the second dispatch, even though neither the GPUBindGroupLayout at index 2 of GPUPipelineLayout.bindGroupLayouts, nor the GPUBindGroup at slot 2, changes. A sketch of this sequence appears below.
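A non-normative sketch of the command sequence described above (the pass, pipeline, and bind group variables are illustrative assumptions):

// Pipelines X and Y share the bind group layouts at indices 0 and 2,
// but were created from two different GPUPipelineLayout objects.
pass.setBindGroup(0, bindGroupA);
pass.setBindGroup(1, bindGroupB);
pass.setBindGroup(2, bindGroupC);
pass.setPipeline(pipelineX);
pass.dispatch(16);

pass.setBindGroup(1, bindGroupD);
pass.setPipeline(pipelineY);
pass.dispatch(16); // the group at slot 2 may be re-bound internally here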
should this example and the note be moved to some "best practices" document?
Note: the expected usage of the GPUPipelineLayout
is placing the most common and the least frequently changing bind groups at the "bottom" of the layout, meaning lower bind group slot numbers, like 0 or 1. The more frequently a bind group needs to change between draw calls, the higher its index should be. This general guideline allows the user agent to minimize state changes between draw calls, and consequently lower the CPU overhead.
10.3.1. Creation
A GPUPipelineLayout
is created via GPUDevice.createPipelineLayout()
.
dictionary GPUPipelineLayoutDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUBindGroupLayout> bindGroupLayouts;
};
10.3.2. GPUDevice
.createPipelineLayout(descriptor)
-
GPUPipelineLayoutDescriptor
descriptor
Returns: GPUPipelineLayout
.
-
Ensure pipeline layout device validation is not violated.
-
Ensure pipeline layout entries validation is not violated.
-
Let pl be a new
GPUPipelineLayout
object. -
Set the pl.
[[bindGroupLayouts]]
to descriptor.bindGroupLayouts
. -
Return pl.
If any of the following conditions are violated:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPUPipelineLayout
and return the result.
pipeline layout device validation: The GPUDevice
must not be lost.
pipeline layout entries validation:
There must be GPULimits.maxBindGroups
or fewer
elements in descriptor.bindGroupLayouts
.
All these GPUBindGroupLayout
entries have to be valid.
there will be more limits applicable to the whole pipeline layout.
Note: two GPUPipelineLayout
objects are considered equivalent for any usage
if their internal [[bindGroupLayouts]]
sequences contain GPUBindGroupLayout
objects that are group-equivalent.
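Note: A non-normative sketch of creating a GPUPipelineLayout from two previously created bind group layouts (the layout variables are illustrative assumptions):

const pipelineLayout = device.createPipelineLayout({
    bindGroupLayouts: [frameBindGroupLayout, materialBindGroupLayout],
});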
11. Shader Modules
11.1. GPUShaderModule
enum GPUCompilationMessageType {
    "error",
    "warning",
    "info"
};

[Serializable]
interface GPUCompilationMessage {
    readonly attribute DOMString message;
    readonly attribute GPUCompilationMessageType type;
    readonly attribute unsigned long long lineNum;
    readonly attribute unsigned long long linePos;
};

[Serializable]
interface GPUCompilationInfo {
    readonly attribute sequence<GPUCompilationMessage> messages;
};

[Serializable]
interface GPUShaderModule {
    Promise<GPUCompilationInfo> compilationInfo();
};
GPUShaderModule includes GPUObjectBase;
GPUShaderModule
is Serializable
. It is a reference to an internal
shader module object, and Serializable
means that the reference can be copied between realms (threads/workers), allowing multiple realms to access
it concurrently. Since GPUShaderModule
is immutable, there are no race
conditions.
11.1.1. Shader Module Creation
dictionary GPUShaderModuleDescriptor : GPUObjectDescriptorBase {
    required USVString code;
    object sourceMap;
};
sourceMap
, if defined, MAY be interpreted as a
source-map-v3 format. (https://sourcemaps.info/spec.html)
Source maps are optional, but serve as a standardized way to support dev-tool
integration such as source-language debugging.
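Note: A non-normative sketch of creating a shader module and inspecting its compilation messages. wgslSource is assumed to be a string containing valid shader code.

const shaderModule = device.createShaderModule({ code: wgslSource });

const info = await shaderModule.compilationInfo();
for (const msg of info.messages) {
    console.log(`${msg.type} at line ${msg.lineNum}: ${msg.message}`);
}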
12. Pipelines
A pipeline, be it GPUComputePipeline
or GPURenderPipeline
,
represents the complete function performed by a combination of the GPU hardware, the driver,
and the user agent, that processes the input data in the shape of bindings and vertex buffers,
and produces some output, like the colors in the output render targets.
Structurally, the pipeline consists of a sequence of programmable stages (shaders) and fixed-function states, such as the blending modes.
Note: Internally, depending on the target platform, the driver may convert some of the fixed-function states into shader code and link it together with the shaders provided by the user. This linking is one of the reasons the object is created as a whole.
This combination state is created as a single object
(by GPUDevice.createComputePipeline()
or GPUDevice.createRenderPipeline()
),
and switched as one
(by GPUComputePassEncoder.setPipeline
or GPURenderEncoderBase.setPipeline
correspondingly).
12.1. Base pipelines
dictionary GPUPipelineDescriptorBase : GPUObjectDescriptorBase {
    GPUPipelineLayout layout;
};

interface mixin GPUPipelineBase {
    GPUBindGroupLayout getBindGroupLayout(unsigned long index);
};
GPUPipelineBase
has the following internal slots:
[[layout]]
of typeGPUPipelineLayout
.-
The definition of the layout of resources which can be used with
this
.
12.1.1. getBindGroupLayout(index)
-
unsigned long
index
Returns: GPUBindGroupLayout
-
If index is greater or equal to
maxBindGroups
:-
Throw a
RangeError
.
-
-
If this is not valid:
-
Return a new error
GPUBindGroupLayout
.
-
-
Return a new
GPUBindGroupLayout
object that references the same internal object as this.[[layout]]
.[[bindGroupLayouts]]
[index].
Specify this more properly once we have internal objects for GPUBindGroupLayout
.
Alternatively, only specify it as a new internal object that is group-equivalent.
Note: Only returning new GPUBindGroupLayout
objects ensures no synchronization is necessary
between the Content timeline and the Device timeline.
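Note: A non-normative sketch of using getBindGroupLayout() to build a bind group compatible with group 0 of an existing pipeline (pipeline and uniformBuffer are illustrative assumptions):

const layout0 = pipeline.getBindGroupLayout(0);
const bindGroup = device.createBindGroup({
    layout: layout0,
    entries: [{ binding: 0, resource: { buffer: uniformBuffer } }],
});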
12.1.2. Default pipeline layout
A GPUPipelineBase
object that was created without a layout
has a default layout created and used instead.
-
Let groupDescs be a sequence of device.
[[limits]]
.maxBindGroups
newGPUBindGroupLayoutDescriptor
objects. -
For each groupDesc in groupDescs:
-
Set groupDesc.
entries
to an empty sequence.
-
-
For each
GPUProgrammableStageDescriptor
stageDesc in the descriptor used to create the pipeline:-
Let stageInfo be the "reflection information" for stageDesc.
Define the reflection information concept so that this spec can interface with the WGSL spec and get information what the interface is for a
GPUShaderModule
for a specific entrypoint. -
Let shaderStage be the
GPUShaderStageFlags
for stageDesc.entryPoint
in stageDesc.module
. -
For each resource resource in stageInfo’s resource interface:
-
Let group be resource’s "group" decoration.
-
Let binding be resource’s "binding" decoration.
-
Let entry be a new
GPUBindGroupLayoutEntry
. -
Set entry.
binding
to binding. -
Set entry.
visibility
to shaderStage. -
If resource is for a sampler binding:
-
If resource is for a comparison sampler binding:
-
Set entry.
type
tocomparison-sampler
.
-
-
If resource is for a buffer binding:
-
Set entry.
hasDynamicOffset
to false. -
If resource is for a uniform buffer:
-
Set entry.
type
touniform-buffer
.
-
-
If resource is for a read-only storage buffer:
-
Set entry.
type
toreadonly-storage-buffer
.
-
-
If resource is for a storage buffer:
-
Set entry.
type
tostorage-buffer
.
-
-
-
If resource is for a texture binding:
-
Set entry.
textureComponentType
to resource’s component type. -
Set entry.
viewDimension
to resource’s dimension. -
If resource is multisampled:
-
Set entry.
multisampled
to true.
-
-
If resource is for a sampled texture:
-
Set entry.
type
tosampled-texture
.
-
-
If resource is for a read-only storage texture:
-
Set entry.
type
toreadonly-storage-texture
. -
Set entry.
storageTextureFormat
to resource’s format.
-
-
If resource is for a write-only storage texture:
-
Set entry.
type
towriteonly-storage-texture
. -
Set entry.
storageTextureFormat
to resource’s format.
-
-
-
If groupDescs[group] has an entry previousEntry with
binding
equal to binding:-
If previousEntry is equal to entry up to
visibility
:-
Add the bits set in entry.
visibility
into previousEntry.visibility
-
-
Else
-
Return null (which will cause the creation of the pipeline to fail).
-
-
-
Else
-
Append entry to groupDescs[group].
-
-
-
-
Let groupLayouts be a new sequence.
-
For each groupDesc in groupDescs:
-
Append device.
createBindGroupLayout()
(groupDesc) to groupLayouts.
-
-
Let desc be a new
GPUPipelineLayoutDescriptor
. -
Set desc.
bindGroupLayouts
to groupLayouts. -
Return device.
createPipelineLayout()
(desc).
This fills the pipeline layout with empty bindgroups. Revisit once the behavior of empty bindgroups is specified.
12.1.3. GPUProgrammableStageDescriptor
dictionary GPUProgrammableStageDescriptor {
    required GPUShaderModule module;
    required USVString entryPoint;
};
A GPUProgrammableStageDescriptor
describes the entry point in the user-provided GPUShaderModule
that controls one of the programmable stages of a pipeline.
-
GPUShaderStage
stage -
GPUProgrammableStageDescriptor
descriptor -
GPUPipelineLayout
layout
-
If the descriptor.
module
is not a validGPUShaderModule
return false. -
If the descriptor.
module
doesn’t contain an entry point at stage named descriptor.entryPoint
return false. -
For each binding that is statically used by the shader entry point, if the result of validating shader binding(binding, layout) is false, return false.
-
Return true.
-
shader binding, reflected from the shader module
-
GPUPipelineLayout
layout
Consider the shader binding annotation of bindIndex for the binding index and bindGroup for the bind group index.
Return true if all of the following conditions are satisfied:
-
layout.
[[bindGroupLayouts]]
[bindGroup] contains aGPUBindGroupLayoutEntry
entry whose entry.binding
== bindIndex. -
If entry.
type
is"sampler"
, the binding has to be a non-comparison sampler. -
If entry.
type
is"comparison-sampler"
, the binding has to be a comparison sampler. -
If entry.
type
is"sampled-texture"
, the binding has to be a sampled texture with the component type of entry.textureComponentType
, and it must be multisampled if and only if entry.multisampled
is true. -
If entry.
type
is"readonly-storage-texture"
, the binding has to be a read-only storage texture with format of entry.storageTextureFormat
. -
If entry.
type
is"writeonly-storage-texture"
, the binding has to be a writable storage texture with format of entry.storageTextureFormat
. -
If entry.
type
is"uniform-buffer"
, the binding has to be a uniform buffer. -
If entry.
type
is"storage-buffer"
, the binding has to be a storage buffer. -
If entry.
type
is"readonly-storage-buffer"
, the binding has to be a read-only storage buffer. -
If entry.
type
is"sampled-texture"
,"readonly-storage-texture"
, or"writeonly-storage-texture"
, the shader view dimension of the texture has to match entry.viewDimension
.
is there a match/switch statement in bikeshed?
A resource binding is considered to be statically used by a shader entry point if and only if it’s reachable by the control flow graph of the shader module, starting at the entry point.
12.2. GPUComputePipeline
A GPUComputePipeline
is a kind of pipeline that controls the compute shader stage,
and can be used in GPUComputePassEncoder
.
Compute inputs and outputs are all contained in the bindings,
according to the given GPUPipelineLayout
.
The outputs correspond to "storage-buffer"
and "writeonly-storage-texture"
binding types.
Stages of a compute pipeline:
-
Compute shader
[Serializable]
interface GPUComputePipeline {
};
GPUComputePipeline includes GPUObjectBase;
GPUComputePipeline includes GPUPipelineBase;
12.2.1. Creation
dictionary GPUComputePipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor computeStage;
};
12.2.2. GPUDevice.createComputePipeline(GPUComputePipelineDescriptor)
-
GPUComputePipelineDescriptor
descriptor
Returns: GPUComputePipeline
.
The createComputePipeline(descriptor)
method is used to create GPUComputePipeline
s.
If any of the conditions below are violated:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPUComputePipeline
and return the result.
-
Ensure the
GPUDevice
is not lost. -
Ensure the descriptor.
layout
is a validGPUPipelineLayout
. -
Ensure the validating GPUProgrammableStageDescriptor(
COMPUTE
, descriptor.computeStage
, descriptor.layout
) succeeds.
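Note: A non-normative sketch of creating a compute pipeline. shaderModule and pipelineLayout are assumed to have been created earlier, and "main" is assumed to name a compute entry point in the module.

const computePipeline = device.createComputePipeline({
    layout: pipelineLayout,
    computeStage: { module: shaderModule, entryPoint: "main" },
});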
12.3. GPURenderPipeline
A GPURenderPipeline
is a kind of pipeline that controls the vertex
and fragment shader stages, and can be used in GPURenderPassEncoder
as well as GPURenderBundleEncoder
.
Render pipeline inputs are:
-
bindings, according to the given
GPUPipelineLayout
-
vertex and index buffers, described by
GPUVertexStateDescriptor
-
the color attachments, described by
GPUColorStateDescriptor
-
optionally, the depth-stencil attachment, described by
GPUDepthStencilStateDescriptor
Render pipeline outputs are:
-
bindings of types
"storage-buffer"
and"writeonly-storage-texture"
-
the color attachments, described by
GPUColorStateDescriptor
-
optionally, depth-stencil attachment, described by
GPUDepthStencilStateDescriptor
Stages of a render pipeline:
-
Vertex fetch, controlled by
GPUVertexStateDescriptor
-
Vertex shader
-
Primitive assembly, controlled by
GPUPrimitiveTopology
-
Rasterization, controlled by
GPURasterizationStateDescriptor
-
Fragment shader
-
Stencil test and operation, controlled by
GPUDepthStencilStateDescriptor
-
Depth test and write, controlled by
GPUDepthStencilStateDescriptor
-
Output merging, controlled by
GPUColorStateDescriptor
we need a deeper description of these stages
[Serializable]
interface GPURenderPipeline {
};
GPURenderPipeline includes GPUObjectBase;
GPURenderPipeline includes GPUPipelineBase;
12.3.1. Creation
dictionary GPURenderPipelineDescriptor : GPUPipelineDescriptorBase {
    required GPUProgrammableStageDescriptor vertexStage;
    GPUProgrammableStageDescriptor fragmentStage;

    required GPUPrimitiveTopology primitiveTopology;
    GPURasterizationStateDescriptor rasterizationState = {};
    required sequence<GPUColorStateDescriptor> colorStates;
    GPUDepthStencilStateDescriptor depthStencilState;
    GPUVertexStateDescriptor vertexState = {};

    GPUSize32 sampleCount = 1;
    GPUSampleMask sampleMask = 0xFFFFFFFF;
    boolean alphaToCoverageEnabled = false;
};
-
vertexStage
describes the vertex shader entry point of the pipeline -
fragmentStage
describes the fragment shader entry point of the pipeline. If it’s "null", the § 12.3.2 No Color Output mode is enabled. -
primitiveTopology
configures the primitive assembly stage of the pipeline. -
rasterizationState
configures the rasterization stage of the pipeline. -
colorStates
describes the color attachments that are written by the pipeline. -
depthStencilState
describes the optional depth-stencil attachment that is written by the pipeline. -
vertexState
configures the vertex fetch stage of the pipeline. -
sampleCount
is the number of MSAA samples that each attachment has to have. -
sampleMask
is a binary mask of MSAA samples, according to § 12.3.4 Sample Masking. -
alphaToCoverageEnabled
enables the § 12.3.3 Alpha to Coverage mode.
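Note: A non-normative sketch of a GPURenderPipelineDescriptor with one color target and no depth-stencil attachment; the module and layout variables, the entry point names, and the "bgra8unorm" format are illustrative assumptions.

const renderPipeline = device.createRenderPipeline({
    layout: pipelineLayout,
    vertexStage: { module: shaderModule, entryPoint: "vs_main" },
    fragmentStage: { module: shaderModule, entryPoint: "fs_main" },
    primitiveTopology: "triangle-list",
    colorStates: [{ format: "bgra8unorm" }],
});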
12.3.2. No Color Output
In no-color-output mode, the pipeline does not produce any color attachment outputs,
and the colorStates
is expected to be empty.
The pipeline still performs rasterization and produces depth values based on the vertex position output. The depth testing and stencil operations can still be used.
12.3.3. Alpha to Coverage
In alpha-to-coverage mode, an additional alpha-to-coverage mask of MSAA samples is generated based on the alpha component of the
fragment shader output value of the colorStates
[0].
The algorithm of producing the extra mask is platform-dependent. It guarantees that:
-
if alpha is 0.0 or less, the result is 0x0
-
if alpha is 1.0 or greater, the result is 0xFFFFFFFF
-
if alpha is greater than some other alpha1, then the produced sample mask has at least as many bits set to 1 as the mask for alpha1
12.3.4. Sample Masking
The final sample mask for a pixel is computed as: rasterization mask & sampleMask
& shader-output mask.
Only the lower sampleCount
bits of the mask are considered.
If the least-significant bit at position N of the final sample mask has value of "0", the sample color outputs (corresponding to sample N) to all attachments of the fragment shader are discarded. Also, no depth test or stencil operations are executed on the relevant samples of the depth-stencil attachment.
Note: the color output for sample N is produced by the fragment shader execution with SV_SampleIndex == N for the current pixel. If the fragment shader doesn’t use this semantics, it’s only executed once per pixel.
The rasterization mask is produced by the rasterization stage, based on the shape of the rasterized polygon. The samples included in the shape have their corresponding bits set to 1 in the mask.
The shader-output mask takes the output value of SV_Coverage semantics in the fragment shader.
If the semantics is not statically used by the shader, and alphaToCoverageEnabled
is enabled, the shader-output mask becomes the alpha-to-coverage mask. Otherwise, it defaults to 0xFFFFFFFF.
link to the semantics of SV_SampleIndex and SV_Coverage in WGSL spec.
12.3.5. GPUDevice.createRenderPipeline(GPURenderPipelineDescriptor)
-
GPURenderPipelineDescriptor
descriptor
Returns: GPURenderPipeline
.
The createRenderPipeline(descriptor)
method is used to create GPURenderPipeline
s.
If any of the conditions below are violated:
-
Generate a
GPUValidationError
in the current scope with appropriate error message. -
Create a new invalid
GPURenderPipeline
and return the result.
-
Ensure the
GPUDevice
is not lost. -
Ensure the descriptor.
layout
is a validGPUPipelineLayout
. -
Ensure the validating GPUProgrammableStageDescriptor(
VERTEX
, descriptor.vertexStage
, descriptor.layout
) succeeds. -
If descriptor.
fragmentStage
is not "null", ensure the validating GPUProgrammableStageDescriptor(FRAGMENT
, descriptor.fragmentStage
, descriptor.layout
) succeeds. -
Ensure the descriptor.
colorStates
.length is less than or equal to 4. -
Ensure validating GPUVertexStateDescriptor(descriptor.
vertexState
, descriptor.vertexStage
) passes. -
If descriptor.
alphaToCoverageEnabled
is true, ensure descriptor.sampleCount
is greater than 1. -
If the output SV_Coverage semantics is statically used by descriptor.
fragmentStage
, ensure descriptor.alphaToCoverageEnabled
is false.
need a proper limit for the maximum number of color targets.
need a more detailed validation of the render states.
need description of the render states.
12.3.6. Primitive Topology
enum GPUPrimitiveTopology {
    "point-list",
    "line-list",
    "line-strip",
    "triangle-list",
    "triangle-strip"
};
12.3.7. Rasterization State
dictionary GPURasterizationStateDescriptor {
    GPUFrontFace frontFace = "ccw";
    GPUCullMode cullMode = "none";

    GPUDepthBias depthBias = 0;
    float depthBiasSlopeScale = 0;
    float depthBiasClamp = 0;
};

enum GPUFrontFace {
    "ccw",
    "cw"
};

enum GPUCullMode {
    "none",
    "front",
    "back"
};
12.3.8. Color State
dictionary GPUColorStateDescriptor {
    required GPUTextureFormat format;

    GPUBlendDescriptor alphaBlend = {};
    GPUBlendDescriptor colorBlend = {};
    GPUColorWriteFlags writeMask = 0xF;  // GPUColorWrite.ALL
};

typedef [EnforceRange] unsigned long GPUColorWriteFlags;

interface GPUColorWrite {
    const GPUColorWriteFlags RED = 0x1;
    const GPUColorWriteFlags GREEN = 0x2;
    const GPUColorWriteFlags BLUE = 0x4;
    const GPUColorWriteFlags ALPHA = 0x8;
    const GPUColorWriteFlags ALL = 0xF;
};
12.3.8.1. Blend State
dictionary GPUBlendDescriptor {
    GPUBlendFactor srcFactor = "one";
    GPUBlendFactor dstFactor = "zero";
    GPUBlendOperation operation = "add";
};

enum GPUBlendFactor {
    "zero",
    "one",
    "src-color",
    "one-minus-src-color",
    "src-alpha",
    "one-minus-src-alpha",
    "dst-color",
    "one-minus-dst-color",
    "dst-alpha",
    "one-minus-dst-alpha",
    "src-alpha-saturated",
    "blend-color",
    "one-minus-blend-color"
};

enum GPUBlendOperation {
    "add",
    "subtract",
    "reverse-subtract",
    "min",
    "max"
};

enum GPUStencilOperation {
    "keep",
    "zero",
    "replace",
    "invert",
    "increment-clamp",
    "decrement-clamp",
    "increment-wrap",
    "decrement-wrap"
};
12.3.9. Depth/Stencil State
dictionary GPUDepthStencilStateDescriptor {
    required GPUTextureFormat format;

    boolean depthWriteEnabled = false;
    GPUCompareFunction depthCompare = "always";

    GPUStencilStateFaceDescriptor stencilFront = {};
    GPUStencilStateFaceDescriptor stencilBack = {};

    GPUStencilValue stencilReadMask = 0xFFFFFFFF;
    GPUStencilValue stencilWriteMask = 0xFFFFFFFF;
};

dictionary GPUStencilStateFaceDescriptor {
    GPUCompareFunction compare = "always";
    GPUStencilOperation failOp = "keep";
    GPUStencilOperation depthFailOp = "keep";
    GPUStencilOperation passOp = "keep";
};
12.3.10. Vertex State
enum GPUIndexFormat {
    "uint16",
    "uint32"
};
12.3.10.1. Vertex Formats
The name of the format specifies the data type of the component, the number of values, and whether the data is normalized.
-
uchar
= unsigned 8-bit value -
char
= signed 8-bit value -
ushort
= unsigned 16-bit value -
short
= signed 16-bit value -
half
= half-precision 16-bit floating point value -
float
= 32-bit floating point value -
uint
= unsigned 32-bit integer value -
int
= signed 32-bit integer value
If no number of values is given in the name, a single value is provided.
If the format has the -bgra
suffix, it means the values are arranged as
blue, green, red and alpha values.
enum GPUVertexFormat {
    "uchar2",
    "uchar4",
    "char2",
    "char4",
    "uchar2norm",
    "uchar4norm",
    "char2norm",
    "char4norm",
    "ushort2",
    "ushort4",
    "short2",
    "short4",
    "ushort2norm",
    "ushort4norm",
    "short2norm",
    "short4norm",
    "half2",
    "half4",
    "float",
    "float2",
    "float3",
    "float4",
    "uint",
    "uint2",
    "uint3",
    "uint4",
    "int",
    "int2",
    "int3",
    "int4"
};
enum GPUInputStepMode {
    "vertex",
    "instance"
};
dictionary GPUVertexStateDescriptor {
    GPUIndexFormat indexFormat = "uint32";
    sequence<GPUVertexBufferLayoutDescriptor?> vertexBuffers = [];
};
A vertex buffer is, conceptually, a view into buffer memory as an array of structures. arrayStride
is the stride, in bytes, between elements of that array.
Each element of a vertex buffer is like a structure with a memory layout defined by its attributes
, which describe the members of the structure.
Each GPUVertexAttributeDescriptor
describes its format
and its offset
, in bytes, within the structure.
Each attribute appears as a separate input in a vertex shader, each bound by a numeric location,
which is specified by shaderLocation
.
Every location must be unique within the GPUVertexStateDescriptor
.
dictionary GPUVertexBufferLayoutDescriptor {
    required GPUSize64 arrayStride;
    GPUInputStepMode stepMode = "vertex";
    required sequence<GPUVertexAttributeDescriptor> attributes;
};

dictionary GPUVertexAttributeDescriptor {
    required GPUVertexFormat format;
    required GPUSize64 offset;
    required GPUIndex32 shaderLocation;
};
-
GPUVertexBufferLayoutDescriptor
descriptor -
GPUProgrammableStageDescriptor
vertexStage
Return true, if and only if, all of the following conditions are true:
- descriptor.attributes.length is less than or equal to 16.
- descriptor.arrayStride is less than or equal to 2048.
- Every attribute at in the list descriptor.attributes has at.offset + sizeOf(at.format) less than or equal to descriptor.arrayStride.
- For every vertex attribute in the shader reflection of vertexStage.module that is known to be statically used by vertexStage.entryPoint, there is a corresponding at element of descriptor.attributes such that:
  - The shader format is at.format.
  - The shader location is at.shaderLocation.
-
add a limit to the number of vertex attributes
-
GPUVertexStateDescriptor
descriptor -
GPUProgrammableStageDescriptor
vertexStage
Return true, if and only if, all of the following conditions are true:
-
descriptor.
vertexBuffers
.length is less than or equal to 8 -
Each vertexBuffer layout descriptor in the list descriptor.
vertexBuffers
passes validating GPUVertexBufferLayoutDescriptor(vertexBuffer, vertexStage) -
Each at in the union of all
GPUVertexAttributeDescriptor
across descriptor.vertexBuffers
has a distinct at.shaderLocation
value.
add a limit to the number of vertex buffers
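Note: A non-normative sketch of a GPUVertexStateDescriptor describing two interleaved attributes (a float3 position and a float2 uv) in a single vertex buffer; the stride and shader locations are illustrative.

const vertexState = {
    indexFormat: "uint16",
    vertexBuffers: [{
        arrayStride: 20, // 12 bytes of position + 8 bytes of uv
        stepMode: "vertex",
        attributes: [
            { format: "float3", offset: 0, shaderLocation: 0 },  // position
            { format: "float2", offset: 12, shaderLocation: 1 }, // uv
        ],
    }],
};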
13. Command Buffers
13.1. GPUCommandBuffer
interface GPUCommandBuffer {
};
GPUCommandBuffer includes GPUObjectBase;
13.1.1. Creation
dictionary GPUCommandBufferDescriptor : GPUObjectDescriptorBase {
};
14. Command Encoding
14.1. GPUCommandEncoder
interface GPUCommandEncoder {
    GPURenderPassEncoder beginRenderPass(GPURenderPassDescriptor descriptor);
    GPUComputePassEncoder beginComputePass(optional GPUComputePassDescriptor descriptor = {});

    void copyBufferToBuffer(
        GPUBuffer source,
        GPUSize64 sourceOffset,
        GPUBuffer destination,
        GPUSize64 destinationOffset,
        GPUSize64 size);

    void copyBufferToTexture(
        GPUBufferCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToBuffer(
        GPUTextureCopyView source,
        GPUBufferCopyView destination,
        GPUExtent3D copySize);

    void copyTextureToTexture(
        GPUTextureCopyView source,
        GPUTextureCopyView destination,
        GPUExtent3D copySize);

    void pushDebugGroup(USVString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(USVString markerLabel);

    void resolveQuerySet(
        GPUQuerySet querySet,
        GPUSize32 firstQuery,
        GPUSize32 queryCount,
        GPUBuffer destination,
        GPUSize64 destinationOffset);

    GPUCommandBuffer finish(optional GPUCommandBufferDescriptor descriptor = {});
};
GPUCommandEncoder includes GPUObjectBase;
GPUCommandEncoder
has the following internal slots:
[[state]]
of typeencoder state
.-
The current state of the
GPUCommandEncoder
, initially set toopen
. [[debug_group_stack]]
of typesequence<USVString>
.-
A stack of active debug group labels.
Each GPUCommandEncoder
has a current encoder state
on the Content timeline which may be one of the following:
- "
open
" -
Indicates the
GPUCommandEncoder
is available to begin new operations. The[[state]]
isopen
any time theGPUCommandEncoder
is valid and has no activeGPURenderPassEncoder
orGPUComputePassEncoder
. - "
encoding a render pass
" -
Indicates the
GPUCommandEncoder
has an activeGPURenderPassEncoder
. The[[state]]
becomesencoding a render pass
oncebeginRenderPass()
is called successfully until endPass()
is called on the returnedGPURenderPassEncoder
, at which point the[[state]]
(if the encoder is still valid) reverts toopen
. - "
encoding a compute pass
" -
Indicates the
GPUCommandEncoder
has an activeGPUComputePassEncoder
. The[[state]]
becomesencoding a compute pass
oncebeginComputePass()
is called successfully until endPass()
is called on the returnedGPUComputePassEncoder
, at which point the[[state]]
(if the encoder is still valid) reverts toopen
. - "
closed
" -
Indicates the
GPUCommandEncoder
is no longer available for any operations. The[[state]]
becomesclosed
oncefinish()
is called or theGPUCommandEncoder
otherwise becomes invalid.
14.1.1. Creation
dictionary GPUCommandEncoderDescriptor : GPUObjectDescriptorBase {
    // TODO: reusability flag?
};
14.2. Copy Commands
14.2.1. GPUTextureDataLayout
dictionary GPUTextureDataLayout {
    GPUSize64 offset = 0;
    required GPUSize32 bytesPerRow;
    GPUSize32 rowsPerImage = 0;
};
A GPUTextureDataLayout
is a layout of images within some linear memory.
It’s used when copying data between a texture and a buffer, or when scheduling a
write into a texture from the GPUQueue
.
-
For
2d
textures, data is copied between one or multiple contiguous images and array layers. -
For
3d
textures, data is copied between one or multiple contiguous images and depth slices.
Define images more precisely. In particular, define them as being comprised of texel blocks.
Define the exact copy semantics, by reference to common algorithms shared by the copy methods.
- bytesPerRow, of type GPUSize32: The stride, in bytes, between the beginning of each row of texel blocks and the subsequent row.
- rowsPerImage, of type GPUSize32, defaulting to 0: rowsPerImage ÷ texel block height × bytesPerRow is the stride, in bytes, between the beginning of each image of data and the subsequent image.
14.2.2. GPUBufferCopyView
dictionary GPUBufferCopyView : GPUTextureDataLayout {
    required GPUBuffer buffer;
};
A GPUBufferCopyView
contains the actual texture data placed in a buffer according to GPUTextureDataLayout
.
Arguments:
-
GPUBufferCopyView
bufferCopyView
Returns: boolean
Return true if and only if all of the following conditions apply:
-
bufferCopyView.
bytesPerRow
must be a multiple of 256.
14.2.3. GPUTextureCopyView
dictionary GPUTextureCopyView {
    required GPUTexture texture;
    GPUIntegerCoordinate mipLevel = 0;
    GPUOrigin3D origin = {};
};
A GPUTextureCopyView
is a view of a sub-region of one or multiple contiguous texture subresources with the initial
offset GPUOrigin3D
in texels, used when copying data from or to a GPUTexture
.
-
origin
: If unspecified, defaults to[0, 0, 0]
.
Arguments:
-
GPUTextureCopyView
textureCopyView
Returns: boolean
Let:
-
blockWidth be the texel block width of textureCopyView.
texture
.[[format]]
. -
blockHeight be the texel block height of textureCopyView.
texture
.[[format]]
.
Return true if and only if all of the following conditions apply:
-
textureCopyView.
texture
must be a validGPUTexture
. -
textureCopyView.
mipLevel
must be less than the[[mipLevelCount]]
of textureCopyView.texture
.
Define the copies with 1d
and 3d
textures. <https://github.com/gpuweb/gpuweb/issues/69>
14.2.4. GPUImageBitmapCopyView
dictionary GPUImageBitmapCopyView {
    required ImageBitmap imageBitmap;
    GPUOrigin2D origin = {};
};
-
origin
: If unspecified, defaults to[0, 0]
.
14.2.5. copyBufferToBuffer(source, sourceOffset, destination, destinationOffset, size)
Arguments:
-
GPUBuffer
source -
GPUSize64
sourceOffset -
GPUBuffer
destination -
GPUSize64
destinationOffset -
GPUSize64
size
Returns: void
Encode a command into the GPUCommandEncoder
that copies size bytes of data from the sourceOffset of a GPUBuffer
source to the destinationOffset of another GPUBuffer
destination.
Given a GPUCommandEncoder
encoder and the arguments GPUBuffer
source, GPUSize64
sourceOffset, GPUBuffer
destination, GPUSize64
destinationOffset, GPUSize64
size, the following validation rules apply:
-
size must be a multiple of 4.
-
sourceOffset must be a multiple of 4.
-
destinationOffset must be a multiple of 4.
-
(sourceOffset + size) must not overflow a
GPUSize64
. -
(destinationOffset + size) must not overflow a
GPUSize64
. -
The
[[size]]
of source must be greater than or equal to (sourceOffset + size). -
The
[[size]]
of destination must be greater than or equal to (destinationOffset + size). -
source and destination must not be the same
GPUBuffer
.
Define the state machine for GPUCommandEncoder. <https://github.com/gpuweb/gpuweb/issues/21>
figure out how to handle overflows in the spec. <https://github.com/gpuweb/gpuweb/issues/69>
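Note: A non-normative sketch of encoding and submitting a buffer-to-buffer copy. srcBuffer and dstBuffer are assumed to be valid GPUBuffers with COPY_SRC and COPY_DST usage respectively, and device.defaultQueue is assumed to be the device's default GPUQueue.

const encoder = device.createCommandEncoder();
encoder.copyBufferToBuffer(srcBuffer, 0, dstBuffer, 0, 256); // offsets and size are multiples of 4
const commandBuffer = encoder.finish();
device.defaultQueue.submit([commandBuffer]);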
14.2.6. Copy Between Buffer and Texture
WebGPU provides copyBufferToTexture()
for buffer-to-texture copies and copyTextureToBuffer()
for texture-to-buffer copies.
The following definitions and validation rules apply to both copyBufferToTexture()
and copyTextureToBuffer()
.
textureCopyView subresource size and Valid Texture Copy Range also applies to copyTextureToTexture()
.
textureCopyView subresource size
Arguments:
-
GPUTextureCopyView
textureCopyView
Returns:
The textureCopyView subresource size of textureCopyView is calculated as follows:
Its width, height and depth are the width, height, and depth, respectively,
of the physical size of textureCopyView.texture
subresource at mipmap level textureCopyView.mipLevel
.
define this as an algorithm with (texture, mipmapLevel) parameters and use the call syntax instead of referring to the definition by label.
Arguments:
-
GPUTextureDataLayout
layout of the linear texture data -
GPUSize64
byteSize - total size of the linear data, in bytes -
GPUTextureFormat
format of the texture -
GPUExtent3D
copyExtent - extent of the texture to copy
Let:
-
blockWidth be the texel block width of format.
-
blockHeight be the texel block height of format.
-
blockSize be the texel block size of format.
-
bytesInACompleteRow be blockSize × copyExtent.width ÷ blockWidth.
-
requiredBytesInCopy be calculated with the following algorithm assuming all the parameters are valid:
    if (copyExtent.width == 0 || copyExtent.height == 0 || copyExtent.depth == 0) {
        requiredBytesInCopy = 0;
    } else {
        GPUSize64 texelBlockRowsPerImage = layout.rowsPerImage / blockHeight;
        GPUSize64 bytesPerImage = layout.bytesPerRow * texelBlockRowsPerImage;
        GPUSize64 bytesInLastSlice = layout.bytesPerRow * (copyExtent.height / blockHeight - 1)
                                   + (copyExtent.width / blockWidth * blockSize);
        requiredBytesInCopy = bytesPerImage * (copyExtent.depth - 1) + bytesInLastSlice;
    }
The following validation rules apply:
For the copy being in-bounds:
-
If layout.
rowsPerImage
is not 0, it must be greater than or equal to copyExtent.height. -
(layout.
offset
+ requiredBytesInCopy) must not overflow aGPUSize64
. -
(layout.
offset
+ requiredBytesInCopy) must be smaller than or equal to byteSize.
For the texel block alignments:
-
layout.
rowsPerImage
must be a multiple of blockHeight. -
layout.
offset
must be a multiple of blockSize.
For other members in layout:
-
If copyExtent.height is greater than 1:
-
layout.
bytesPerRow
must be greater than or equal to the number of bytesInACompleteRow. -
If copyExtent.depth is greater than 1:
-
layout.
rowsPerImage
must be greater than or equal to copyExtent.height.
Valid Texture Copy Range
Given a GPUTextureCopyView
textureCopyView and a GPUExtent3D
copySize, let
-
blockWidth be the texel block width of textureCopyView.
texture
.[[format]]
. -
blockHeight be the texel block height of textureCopyView.
texture
.[[format]]
.
The following validation rules apply:
-
If the
[[dimension]]
of textureCopyView.texture
is1d
: -
If the
[[dimension]]
of textureCopyView.texture
is2d
: -
copySize.width must be a multiple of blockWidth.
-
copySize.height must be a multiple of blockHeight.
Define the copies with 1d
and 3d
textures. <https://github.com/gpuweb/gpuweb/issues/69>
Additional restrictions on rowsPerImage if needed. <https://github.com/gpuweb/gpuweb/issues/537>
Define the copies with "depth24plus"
and "depth24plus-stencil8"
. <https://github.com/gpuweb/gpuweb/issues/652>
convert "Valid Texture Copy Range" into an algorithm with parameters, similar to "validating linear texture data"
14.2.6.1. copyBufferToTexture(source, destination, copySize)
Arguments:
-
GPUBufferCopyView
source -
GPUTextureCopyView
destination -
GPUExtent3D
copySize
Returns: void
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of a GPUBuffer
to a sub-region of one or multiple contiguous GPUTexture
subresources.
source and copySize define the region of the source buffer.
destination and copySize define the region of the destination texture subresource.
copyBufferToTexture Valid Usage
Given a GPUCommandEncoder
encoder and the arguments GPUBufferCopyView
source, GPUTextureCopyView
destination and GPUExtent3D
copySize, the following validation rules apply:
For encoder:
For source:
-
validating GPUBufferCopyView(source) returns true.
For destination:
-
validating GPUTextureCopyView(destination) returns true.
-
destination.
texture
.[[textureUsage]]
must containCOPY_DST
. -
destination.
texture
.[[sampleCount]]
must be 1.
For the copy ranges:
-
validating linear texture data(source, source.
buffer
.[[size]]
, destination.texture
.[[format]]
, copySize) succeeds. -
Valid Texture Copy Range applies to destination and copySize.
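Note: A non-normative sketch of copying a tightly packed 64×64 "rgba8unorm" image from a buffer into mip level 0 of a texture; stagingBuffer and dstTexture are illustrative assumptions. With 4-byte texels, bytesPerRow is 64 × 4 = 256, which also satisfies the 256-byte alignment rule.

encoder.copyBufferToTexture(
    { buffer: stagingBuffer, offset: 0, bytesPerRow: 256, rowsPerImage: 64 },
    { texture: dstTexture, mipLevel: 0, origin: { x: 0, y: 0, z: 0 } },
    { width: 64, height: 64, depth: 1 });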
14.2.6.2. copyTextureToBuffer(source, destination, copySize)
Arguments:
-
GPUTextureCopyView
source -
GPUBufferCopyView
destination -
GPUExtent3D
copySize
Returns: void
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of one or multiple contiguous GPUTexture
subresources to a sub-region of a GPUBuffer
.
source and copySize define the region of the source texture subresource.
destination and copySize define the region of the destination buffer.
copyTextureToBuffer Valid Usage
Given a GPUCommandEncoder
encoder and the arguments GPUTextureCopyView
source, GPUBufferCopyView
destination, GPUExtent3D
copySize, the following validation rules apply:
For encoder:
For source:
-
validating GPUTextureCopyView(source) returns true.
-
source.
texture
.[[textureUsage]]
must containCOPY_SRC
. -
source.
texture
.[[sampleCount]]
must be 1.
For destination:
-
validating GPUBufferCopyView(destination) returns true.
For the copy ranges:
-
validating linear texture data(destination, destination.
buffer
.[[size]]
, source.texture
.[[format]]
, copySize) succeeds. -
Valid Texture Copy Range applies to source and copySize.
14.2.7. copyTextureToTexture(source, destination, copySize)
Arguments:
-
GPUTextureCopyView
source -
GPUTextureCopyView
destination -
GPUExtent3D
copySize
Returns: void
Encode a command into the GPUCommandEncoder
that copies data from a sub-region of one
or multiple contiguous GPUTexture
subresources to another sub-region of one or
multiple contiguous GPUTexture
subresources.
source and copySize define the region of the source texture subresources.
destination and copySize define the region of the destination texture subresources.
copyTextureToTexture Valid Usage
Given a GPUCommandEncoder
encoder and the arguments GPUTextureCopyView
source, GPUTextureCopyView
destination, GPUExtent3D
copySize, let:
-
A copy of the whole subresource be the command encoder.
copyTextureToTexture()
whose parameters source, destination and copySize meet the following conditions:-
The textureCopyView subresource size of source must be equal to copySize.
-
The textureCopyView subresource size of destination must be equal to copySize.
-
The following validation rules apply:
For encoder:
For source:
-
validating GPUTextureCopyView(source) returns true.
-
source.
texture
.[[textureUsage]]
must containCOPY_SRC
.
For destination:
-
validating GPUTextureCopyView(destination) returns true.
-
destination.
texture
.[[textureUsage]]
must containCOPY_DST
.
For the texture [[sampleCount]]
:
-
source.
texture
.[[sampleCount]]
must be equal to destination.texture
.[[sampleCount]]
. -
If source.
texture
.[[sampleCount]]
is greater than 1:-
The copy with source, destination and copySize must be a copy of the whole subresource.
-
For the texture [[format]]
:
-
source.
texture
.[[format]]
must be equal to destination.texture
.[[format]]
. -
If source.
texture
.[[format]]
is a depth-stencil format:-
The copy with source, destination and copySize must be a copy of the whole subresource.
-
For the copy ranges:
-
Valid Texture Copy Range applies to source and copySize.
-
Valid Texture Copy Range applies to destination and copySize.
-
The set of subresources for texture copy(source, copySize) and the set of subresources for texture copy(destination, copySize) must be disjoint.
-
If textureCopyView.
texture
.[[dimension]]
is"2d"
:-
For each arrayLayer of the copySize.depth array layers starting at textureCopyView.
origin
.z:-
The subresource of textureCopyView.
texture
at mipmap level textureCopyView.mipLevel
and array layer arrayLayer.
-
-
-
Otherwise:
-
The subresource of textureCopyView.
texture
at mipmap level textureCopyView.mipLevel
.
-
14.3. Debug Markers
Both command encoders and programmable pass encoders provide methods to apply debug labels to groups of commands or insert a single label into the command sequence. Debug groups can be nested to create a hierarchy of labeled commands. These labels may be passed to the native API backends for tooling, may be used by the user agent’s internal tooling, or may be a no-op when such tooling is not available or applicable.
Debug groups in a GPUCommandEncoder
or GPUProgrammablePassEncoder
must be well nested.
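Note: A non-normative sketch of well-nested debug groups on a command encoder:

encoder.pushDebugGroup("shadow pass");
encoder.insertDebugMarker("cascade 0");
// ... encode commands ...
encoder.popDebugGroup();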
14.3.1. pushDebugGroup(groupLabel)
GPUCommandEncoder
.
Arguments:
-
USVString
groupLabel
Returns: void
Marks the beginning of a labeled group of commands for the GPUCommandEncoder
.
groupLabel defines the label for the command group.
On the Device timeline, the following steps occur:
-
If the Valid Usage rules are met:
-
push groupLabel onto the end of this.
[[debug_group_stack]]
.
-
14.3.2. popDebugGroup()
GPUCommandEncoder
.
Returns: void
Marks the end of a labeled group of commands for the GPUCommandEncoder
.
On the Device timeline, the following steps occur:
-
If the Valid Usage rules are met:
-
pop an entry off the end of this.
[[debug_group_stack]]
.
-
-
this.
[[debug_group_stack]]
.length must be greater than 0.
14.3.3. insertDebugMarker(markerLabel)
GPUCommandEncoder
.
Arguments:
-
USVString
markerLabel
Returns: void
Inserts a single debug marker label into the GPUCommandEncoder
's commands sequence .
markerLabel defines the label to insert.
14.4. Finalization
A GPUCommandBuffer
containing the commands recorded by the GPUCommandEncoder
can be created
by calling finish()
. Once finish()
has been called the
command encoder can no longer be used.
14.4.1. finish(descriptor)
GPUCommandEncoder
.
Arguments:
-
optional
GPUCommandBufferDescriptor
descriptor = {}
Returns: GPUCommandBuffer
Completes recording of the commands sequence and returns a corresponding GPUCommandBuffer
.
15. Programmable Passes
interface mixin GPUProgrammablePassEncoder {
    void setBindGroup(
        GPUIndex32 index,
        GPUBindGroup bindGroup,
        optional sequence<GPUBufferDynamicOffset> dynamicOffsets = []);

    void setBindGroup(
        GPUIndex32 index,
        GPUBindGroup bindGroup,
        Uint32Array dynamicOffsetsData,
        GPUSize64 dynamicOffsetsDataStart,
        GPUSize32 dynamicOffsetsDataLength);

    void pushDebugGroup(USVString groupLabel);
    void popDebugGroup();
    void insertDebugMarker(USVString markerLabel);

    void beginPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
    void endPipelineStatisticsQuery(GPUQuerySet querySet, GPUSize32 queryIndex);
};
GPUProgrammablePassEncoder
has the following internal slots:
[[debug_group_stack]]
of typesequence<USVString>
.-
A stack of active debug group labels.
15.1. Debug Markers
Debug marker methods for programmable pass encoders provide the same functionality as command encoder debug markers while recording a programmable pass.
15.1.1. pushDebugGroup(groupLabel)
GPUProgrammablePassEncoder
.
Arguments:
-
USVString
groupLabel
Returns: void
Marks the beginning of a labeled group of commands for the GPUProgrammablePassEncoder
.
groupLabel defines the label for the command group.
On the Device timeline, the following steps occur:
-
push groupLabel onto the end of this.
[[debug_group_stack]]
.
15.1.2. popDebugGroup()
GPUProgrammablePassEncoder
.
Returns: void
Marks the end of a labeled group of commands for the GPUProgrammablePassEncoder
.
On the Device timeline, the following steps occur:
-
If the Valid Usage rules are met:
-
pop an entry off the end of this.
[[debug_group_stack]]
.
-
-
this.
[[debug_group_stack]]
.length must be greater than 0.
15.1.3. insertDebugMarker(markerLabel)
Arguments:
-
USVString
markerLabel
Returns: void
Inserts a single debug marker label into the GPUProgrammablePassEncoder
's commands sequence .
markerLabel defines the label to insert.
16. Compute Passes
16.1. GPUComputePassEncoder
interface GPUComputePassEncoder {
    void setPipeline(GPUComputePipeline pipeline);
    void dispatch(GPUSize32 x, optional GPUSize32 y = 1, optional GPUSize32 z = 1);
    void dispatchIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);

    void endPass();
};
GPUComputePassEncoder includes GPUObjectBase;
GPUComputePassEncoder includes GPUProgrammablePassEncoder;
16.1.1. Creation
dictionary GPUComputePassDescriptor : GPUObjectDescriptorBase {
};
16.2. Finalization
The compute pass encoder can be ended by calling endPass()
once the user
has finished recording commands for the pass. Once endPass()
has been
called the compute pass encoder can no longer be used.
16.2.1. endPass()
GPUComputePassEncoder
.
Returns: void
Completes recording of the compute pass commands sequence.
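Note: A non-normative sketch of recording a compute pass; computePipeline and bindGroup are assumed to have been created earlier.

const pass = encoder.beginComputePass();
pass.setPipeline(computePipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatch(64, 1, 1);
pass.endPass();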
17. Render Passes
17.1. GPURenderPassEncoder
interface mixin GPURenderEncoderBase {
    void setPipeline(GPURenderPipeline pipeline);

    void setIndexBuffer(GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size = 0);
    void setVertexBuffer(GPUIndex32 slot, GPUBuffer buffer, optional GPUSize64 offset = 0, optional GPUSize64 size = 0);

    void draw(GPUSize32 vertexCount,
              optional GPUSize32 instanceCount = 1,
              optional GPUSize32 firstVertex = 0,
              optional GPUSize32 firstInstance = 0);
    void drawIndexed(GPUSize32 indexCount,
                     optional GPUSize32 instanceCount = 1,
                     optional GPUSize32 firstIndex = 0,
                     optional GPUSignedOffset32 baseVertex = 0,
                     optional GPUSize32 firstInstance = 0);

    void drawIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
    void drawIndexedIndirect(GPUBuffer indirectBuffer, GPUSize64 indirectOffset);
};

interface GPURenderPassEncoder {
    void setViewport(float x, float y,
                     float width, float height,
                     float minDepth, float maxDepth);

    void setScissorRect(GPUIntegerCoordinate x, GPUIntegerCoordinate y,
                        GPUIntegerCoordinate width, GPUIntegerCoordinate height);

    void setBlendColor(GPUColor color);
    void setStencilReference(GPUStencilValue reference);

    void beginOcclusionQuery(GPUSize32 queryIndex);
    void endOcclusionQuery(GPUSize32 queryIndex);

    void executeBundles(sequence<GPURenderBundle> bundles);
    void endPass();
};
GPURenderPassEncoder includes GPUObjectBase;
GPURenderPassEncoder includes GPUProgrammablePassEncoder;
GPURenderPassEncoder includes GPURenderEncoderBase;
-
setIndexBuffer()
/setVertexBuffer()
:-
If
size
is zero, the remaining size (afteroffset
) of theGPUBuffer
is used.
-
-
In indirect draw calls, the base instance field (inside the indirect buffer data) must be set to zero.
-
-
An error is generated if
width
orheight
is not greater than 0.
-
When a GPURenderPassEncoder
is created, it has the following default state:
-
Viewport:
-
x, y
=0.0, 0.0
-
width, height
= the dimensions of the pass’s render targets -
minDepth, maxDepth
=0.0, 1.0
-
-
Scissor rectangle:
-
x, y
=0, 0
-
width, height
= the dimensions of the pass’s render targets
-
When a GPURenderBundle
is executed, it does not inherit the pass’s pipeline,
bind groups, or vertex or index buffers. After a GPURenderBundle
has executed,
the pass’s pipeline, bind groups, and vertex and index buffers are cleared. If zero GPURenderBundle
s are executed, the command buffer state is unchanged.
17.1.1. Creation
dictionary GPURenderPassDescriptor : GPUObjectDescriptorBase {
    required sequence<GPURenderPassColorAttachmentDescriptor> colorAttachments;
    GPURenderPassDepthStencilAttachmentDescriptor depthStencilAttachment;
    GPUQuerySet occlusionQuerySet;
};
17.1.1.1. Color Attachments
dictionary GPURenderPassColorAttachmentDescriptor {
    required GPUTextureView attachment;
    GPUTextureView resolveTarget;

    required (GPULoadOp or GPUColor) loadValue;
    GPUStoreOp storeOp = "store";
};
17.1.1.2. Depth/Stencil Attachments
dictionary GPURenderPassDepthStencilAttachmentDescriptor {
    required GPUTextureView attachment;

    required (GPULoadOp or float) depthLoadValue;
    required GPUStoreOp depthStoreOp;
    boolean depthReadOnly = false;

    required (GPULoadOp or GPUStencilValue) stencilLoadValue;
    required GPUStoreOp stencilStoreOp;
    boolean stencilReadOnly = false;
};
17.1.2. Load & Store Operations
enum GPULoadOp {
    "load"
};

enum GPUStoreOp {
    "store",
    "clear"
};
17.2. Finalization
The render pass encoder can be ended by calling endPass()
once the user
has finished recording commands for the pass. Once endPass()
has been
called the render pass encoder can no longer be used.
17.2.1. endPass()
GPURenderPassEncoder
.
Returns: void
Completes recording of the render pass commands sequence.
18. Bundles
18.1. GPURenderBundle
interface GPURenderBundle {
};
GPURenderBundle includes GPUObjectBase;
18.1.1. Creation
dictionary GPURenderBundleDescriptor : GPUObjectDescriptorBase {
};

interface GPURenderBundleEncoder {
    GPURenderBundle finish(optional GPURenderBundleDescriptor descriptor = {});
};
GPURenderBundleEncoder includes GPUObjectBase;
GPURenderBundleEncoder includes GPUProgrammablePassEncoder;
GPURenderBundleEncoder includes GPURenderEncoderBase;
18.1.2. Encoding
dictionary GPURenderBundleEncoderDescriptor : GPUObjectDescriptorBase {
    required sequence<GPUTextureFormat> colorFormats;
    GPUTextureFormat depthStencilFormat;
    GPUSize32 sampleCount = 1;
};
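Example (non-normative): a bundle can be recorded once and replayed in many passes. The sketch assumes GPUDevice.createRenderBundleEncoder() as defined earlier in this specification, and renderPipeline and passEncoder created elsewhere; the colorFormats, depthStencilFormat, and sampleCount must be compatible with any render pass the bundle is executed in.

    // Non-normative sketch; `device`, `renderPipeline`, and `passEncoder` are
    // placeholders for objects created elsewhere.
    const bundleEncoder = device.createRenderBundleEncoder({
      colorFormats: ["bgra8unorm"],
      depthStencilFormat: "depth24plus-stencil8",
      sampleCount: 1,
    });
    bundleEncoder.setPipeline(renderPipeline);
    bundleEncoder.draw(3, 1, 0, 0);
    const bundle = bundleEncoder.finish();

    // Later, inside a compatible render pass:
    passEncoder.executeBundles([bundle]);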
19. Queues
interface GPUQueue {
    void submit(sequence<GPUCommandBuffer> commandBuffers);

    GPUFence createFence(optional GPUFenceDescriptor descriptor = {});
    void signal(GPUFence fence, GPUFenceValue signalValue);

    void writeBuffer(GPUBuffer buffer,
                     GPUSize64 bufferOffset,
                     ArrayBuffer data,
                     optional GPUSize64 dataOffset = 0,
                     optional GPUSize64 size);

    void writeTexture(GPUTextureCopyView destination,
                      ArrayBuffer data,
                      GPUTextureDataLayout dataLayout,
                      GPUExtent3D size);

    void copyImageBitmapToTexture(GPUImageBitmapCopyView source,
                                  GPUTextureCopyView destination,
                                  GPUExtent3D copySize);
};
GPUQueue includes GPUObjectBase;
writeBuffer(buffer, bufferOffset, data, dataOffset, size)
    Takes the data contents of size size, starting from the byte offset dataOffset, and schedules a write operation of these contents to the buffer buffer on the Queue timeline starting at bufferOffset. Any subsequent modifications to data do not affect what is written at the time that the scheduled operation runs. (A non-normative usage sketch follows the definitions of these queue operations.)
    If size is 0, it is set to data.byteLength - dataOffset if the result is non-negative; otherwise OperationError is thrown.
    The operation throws OperationError if any of the following is true:
    - buffer isn’t in the "unmapped" buffer state.
    - bufferOffset is not a multiple of 4.
    - size is not a positive multiple of 4.
    - dataOffset + size exceeds data.byteLength.
    The operation does nothing and produces an error if any of the following is true:
writeTexture(destination, data, dataLayout, size)
    Takes the data contents and schedules a write operation of these contents to the destination texture copy view in the queue. Any subsequent modifications to data do not affect what is written at the time that the scheduled operation runs.
    The operation throws OperationError if dataLayout.offset exceeds data.byteLength.
    The operation does nothing and produces an error if any of the following is true:
    - destination.texture.[[textureUsage]] doesn’t include the COPY_DST flag.
    - destination.texture is destroyed.
    - validating linear texture data(dataLayout, data.byteLength, destination.texture.[[format]], size) fails.
    - Valid Texture Copy Range fails to apply to destination and size.
    Note: unlike GPUCommandEncoder.copyBufferToTexture, there is no alignment requirement on dataLayout.bytesPerRow.
copyImageBitmapToTexture(source, destination, copySize)
    Schedules a copy operation of the contents of an image bitmap into the destination texture.
    The operation throws OperationError if any of the following requirements are unmet:
submit(commandBuffers)
    Schedules the execution of the command buffers by the GPU on this queue.
    Does nothing and produces an error if any of the following is true:
    - Any GPUBuffer referenced in any element of commandBuffers isn’t in the "unmapped" buffer state.
    - Any of the usage scopes contained in the command buffers fail the usage scope validation.
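Example (non-normative): the sketch below exercises writeBuffer(), writeTexture(), and submit(). It assumes device.defaultQueue as defined earlier in this specification, and buffer, texture, and commandEncoder created elsewhere with the appropriate usages (for example COPY_DST).

    // Non-normative sketch; `device`, `buffer`, `texture`, and `commandEncoder`
    // are placeholders for objects created elsewhere.
    const queue = device.defaultQueue;

    // Write 16 bytes into `buffer` at offset 0; bufferOffset and size must be multiples of 4.
    const vertexData = new Float32Array([1, 2, 3, 4]);
    queue.writeBuffer(buffer, 0, vertexData.buffer, 0, vertexData.byteLength);

    // Write a 4x4 RGBA8 region into mip level 0 of `texture`;
    // bytesPerRow has no alignment requirement here.
    const texels = new Uint8Array(4 * 4 * 4);
    queue.writeTexture(
      { texture: texture, mipLevel: 0, origin: [0, 0, 0] },
      texels.buffer,
      { offset: 0, bytesPerRow: 4 * 4, rowsPerImage: 4 },
      [4, 4, 1]);

    // Submit previously recorded command buffers.
    queue.submit([commandEncoder.finish()]);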
19.1. GPUFence
interface GPUFence {
    GPUFenceValue getCompletedValue();
    Promise<void> onCompletion(GPUFenceValue completionValue);
};
GPUFence includes GPUObjectBase;
19.1.1. Creation
dictionary GPUFenceDescriptor : GPUObjectDescriptorBase {
    GPUFenceValue initialValue = 0;
};
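Example (non-normative): a sketch of fence usage; queue is a GPUQueue obtained from the device, and commandBuffer is a placeholder for previously recorded work.

    // Non-normative sketch; `queue` is a GPUQueue and `commandBuffer` is a
    // placeholder for previously recorded work.
    const fence = queue.createFence({ initialValue: 0 });
    queue.submit([commandBuffer]);
    queue.signal(fence, 1); // signaled on the Queue timeline after the submitted work

    fence.onCompletion(1).then(() => {
      // getCompletedValue() is now at least 1.
      console.log(fence.getCompletedValue());
    });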
20. Queries
20.1. QuerySet
interface GPUQuerySet {
    void destroy();
};
GPUQuerySet includes GPUObjectBase;
20.1.1. Creation
dictionary GPUQuerySetDescriptor : GPUObjectDescriptorBase {
    required GPUQueryType type;
    required GPUSize32 count;
    sequence<GPUPipelineStatisticName> pipelineStatistics = [];
};
pipelineStatistics, of type sequence<GPUPipelineStatisticName>, defaulting to []
    The set of GPUPipelineStatisticName values in this sequence defines which pipeline statistics will be returned in the new query set.
    - pipelineStatistics is ignored if type is not "pipeline-statistics".
    - If "pipeline-statistics-query" is not available, type must not be "pipeline-statistics".
    - If type is "pipeline-statistics", pipelineStatistics must be a sequence of GPUPipelineStatisticName values that contains no duplicates.
20.2. QueryType
enum GPUQueryType {
    "occlusion",
    "pipeline-statistics"
};
20.3. Pipeline Statistics Query
enum GPUPipelineStatisticName {
    "vertex-shader-invocations",
    "clipper-invocations",
    "clipper-primitives-out",
    "fragment-shader-invocations",
    "compute-shader-invocations"
};
When resolving a pipeline statistics query, each result is written as a uint64, and the number and order of the results written to the GPU buffer match the number and order of the GPUPipelineStatisticName values specified in pipelineStatistics.
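Example (non-normative): a sketch of occlusion query usage. It assumes GPUDevice.createQuerySet() taking the GPUQuerySetDescriptor defined above, and a render pass whose GPURenderPassDescriptor set occlusionQuerySet to the created query set.

    // Non-normative sketch; `device` and `passEncoder` are placeholders, and the
    // render pass was begun with `occlusionQuerySet: querySet` in its descriptor.
    const querySet = device.createQuerySet({ type: "occlusion", count: 16 });

    passEncoder.beginOcclusionQuery(0);
    // ... draw calls whose results are accumulated into query index 0 ...
    passEncoder.endOcclusionQuery(0);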
21. Canvas Rendering & Swap Chains
interface GPUCanvasContext {
    GPUSwapChain configureSwapChain(GPUSwapChainDescriptor descriptor);
    Promise<GPUTextureFormat> getSwapChainPreferredFormat(GPUDevice device);
};
- configureSwapChain(): Configures the swap chain for this canvas, and returns a new GPUSwapChain object representing it. Destroys any swapchain previously returned by configureSwapChain(), including all of the textures it has produced.
dictionary GPUSwapChainDescriptor : GPUObjectDescriptorBase {
    required GPUDevice device;
    required GPUTextureFormat format;
    GPUTextureUsageFlags usage = 0x10;  // GPUTextureUsage.OUTPUT_ATTACHMENT
};
interface GPUSwapChain {
    GPUTexture getCurrentTexture();
};
GPUSwapChain includes GPUObjectBase;
In the "update the rendering [of the] Document
" step of the "Update the rendering" HTML processing
model, the contents of the GPUTexture
most recently returned by getCurrentTexture()
are used to update the rendering for the canvas
, and it is as
if destroy()
were called on it (making it unusable elsewhere in WebGPU).
Before this drawing buffer is presented for compositing, the implementation shall ensure that all rendering operations have been flushed to the drawing buffer.
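Example (non-normative): a sketch of configuring and presenting a swap chain; context is a GPUCanvasContext for a canvas element and device is a GPUDevice, both obtained elsewhere.

    // Non-normative sketch; `context` is a GPUCanvasContext and `device` is a
    // GPUDevice, both obtained elsewhere.
    context.getSwapChainPreferredFormat(device).then((format) => {
      const swapChain = context.configureSwapChain({
        device: device,
        format: format,
        usage: GPUTextureUsage.OUTPUT_ATTACHMENT, // the default (0x10)
      });

      function frame() {
        // A fresh texture is returned each frame; once presented it behaves as if
        // destroy() had been called on it.
        const view = swapChain.getCurrentTexture().createView();
        // ... encode and submit a render pass targeting `view` ...
        requestAnimationFrame(frame);
      }
      requestAnimationFrame(frame);
    });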
22. Errors & Debugging
22.1. Fatal Errors
interface GPUDeviceLostInfo {
    readonly attribute DOMString message;
};

partial interface GPUDevice {
    readonly attribute Promise<GPUDeviceLostInfo> lost;
};
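Example (non-normative): a sketch of observing device loss via the lost promise.

    // Non-normative sketch; `device` is a GPUDevice.
    device.lost.then((info) => {
      // The device can no longer be used; `info.message` gives a human-readable reason.
      console.warn("WebGPU device lost: " + info.message);
      // An application would typically request a new adapter and device here.
    });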
22.2. Error Scopes
enum GPUErrorFilter {
    "none",
    "out-of-memory",
    "validation"
};
interface GPUOutOfMemoryError {
    constructor();
};

interface GPUValidationError {
    constructor(DOMString message);
    readonly attribute DOMString message;
};

typedef (GPUOutOfMemoryError or GPUValidationError) GPUError;
partial interface GPUDevice {
    void pushErrorScope(GPUErrorFilter filter);
    Promise<GPUError?> popErrorScope();
};
popErrorScope() throws OperationError if there are no error scopes on the stack. popErrorScope() rejects with OperationError if the device is lost.
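Example (non-normative): a sketch of capturing validation errors with an error scope.

    // Non-normative sketch; `device` is a GPUDevice.
    device.pushErrorScope("validation");

    // ... create resources or encode commands that may generate validation errors ...

    device.popErrorScope().then((error) => {
      if (error) {
        // For a "validation" scope this is a GPUValidationError; an
        // "out-of-memory" scope would yield a GPUOutOfMemoryError.
        console.error("Captured error: " + error.message);
      }
    });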
22.3. Telemetry
[Exposed=(Window, DedicatedWorker)]
interface GPUUncapturedErrorEvent : Event {
    constructor(DOMString type, GPUUncapturedErrorEventInit gpuUncapturedErrorEventInitDict);
    [SameObject] readonly attribute GPUError error;
};

dictionary GPUUncapturedErrorEventInit : EventInit {
    required GPUError error;
};

partial interface GPUDevice {
    [Exposed=(Window, DedicatedWorker)] attribute EventHandler onuncapturederror;
};
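Example (non-normative): a sketch of handling errors that no error scope captured; this assumes GPUDevice is an EventTarget, as the onuncapturederror event handler attribute implies.

    // Non-normative sketch; `device` is a GPUDevice.
    device.addEventListener("uncapturederror", (event) => {
      // `event` is a GPUUncapturedErrorEvent; `event.error` is the GPUError that
      // no error scope captured.
      console.error("Uncaptured WebGPU error:", event.error);
    });

    // Equivalently, via the event handler attribute:
    device.onuncapturederror = (event) => { console.error(event.error); };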
23. Type Definitions
typedef [EnforceRange] unsigned long GPUBufferDynamicOffset;
typedef [EnforceRange] unsigned long long GPUFenceValue;
typedef [EnforceRange] unsigned long GPUStencilValue;
typedef [EnforceRange] unsigned long GPUSampleMask;
typedef [EnforceRange] long GPUDepthBias;
typedef [EnforceRange] unsigned long long GPUSize64;
typedef [EnforceRange] unsigned long GPUIntegerCoordinate;
typedef [EnforceRange] unsigned long GPUIndex32;
typedef [EnforceRange] unsigned long GPUSize32;
typedef [EnforceRange] long GPUSignedOffset32;
23.1. Colors & Vectors
dictionary GPUColorDict {
    required double r;
    required double g;
    required double b;
    required double a;
};
typedef (sequence<double> or GPUColorDict) GPUColor;
Note: double
is large enough to precisely hold 32-bit signed/unsigned
integers and single-precision floats.
dictionary GPUOrigin2DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin2DDict) GPUOrigin2D;
dictionary GPUOrigin3DDict {
    GPUIntegerCoordinate x = 0;
    GPUIntegerCoordinate y = 0;
    GPUIntegerCoordinate z = 0;
};
typedef (sequence<GPUIntegerCoordinate> or GPUOrigin3DDict) GPUOrigin3D;
An Origin3D is a GPUOrigin3D. Origin3D is a spec namespace for the following definitions:
For a GPUOrigin3D value origin, depending on its type, the syntax:
- origin.x refers to either GPUOrigin3DDict.x or the first item of the sequence.
- origin.y refers to either GPUOrigin3DDict.y or the second item of the sequence.
- origin.z refers to either GPUOrigin3DDict.z or the third item of the sequence.
dictionary GPUExtent3DDict {
    required GPUIntegerCoordinate width;
    required GPUIntegerCoordinate height;
    required GPUIntegerCoordinate depth;
};
typedef (sequence<GPUIntegerCoordinate> or GPUExtent3DDict) GPUExtent3D;
An Extent3D is a GPUExtent3D. Extent3D is a spec namespace for the following definitions:
For a GPUExtent3D value extent, depending on its type, the syntax:
- extent.width refers to either GPUExtent3DDict.width or the first item of the sequence.
- extent.height refers to either GPUExtent3DDict.height or the second item of the sequence.
- extent.depth refers to either GPUExtent3DDict.depth or the third item of the sequence.
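Example (non-normative): the dictionary and sequence forms of these types are interchangeable; each pair below denotes the same value.

    // Non-normative sketch: each pair below denotes the same value.
    const clearColorDict = { r: 0.2, g: 0.2, b: 0.2, a: 1.0 };
    const clearColorSeq  = [0.2, 0.2, 0.2, 1.0];       // same GPUColor

    const originDict = { x: 0, y: 0, z: 0 };
    const originSeq  = [0, 0, 0];                      // same GPUOrigin3D

    const extentDict = { width: 256, height: 256, depth: 1 };
    const extentSeq  = [256, 256, 1];                  // same GPUExtent3D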
typedef sequence<(GPUBuffer or ArrayBuffer)> GPUMappedBuffer;
GPUMappedBuffer
is always a sequence of 2 elements, of types GPUBuffer
and ArrayBuffer
, respectively.
24. Temporary usages of non-exported dfns
Eventually all of these should disappear, but they are useful to avoid warnings while building the specification.