Start writing spec for device/adapter, introduce internal objects #422
Conversation
underlying implementation.
- If an extension is not supported by the user agent,
  it will not be present in the object.
- If an extension is supported by the user agent, but
It would be nice to leave the possibility for user agents to lie about supported extensions, for example to reduce fingerprinting surface in untrusted contexts.
@Kangz , how would leaving the possibility for user agents to lie change the proposed spec text?
Agree, since the [=adapter=] can expose whatever it wants, this leaves the user agent open to "lie" by just not including all hardware capabilities in the WebGPU adapter's capabilities.
Maybe we could just say that if an extension is exposed by the user agent and supported by the adapter, then it will be true, and otherwise undefined?
I'll tweak this.
I'm also thinking about changing the 'false' case to also expose undefined. There shouldn't be any reason that apps need to differentiate the two.
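For illustration, here is a minimal JavaScript sketch of the convention being discussed, assuming the "true or undefined" behavior described above. The `extensions` member and the extension name are placeholders for the proposal under discussion, not settled API.

```js
// Hypothetical sketch of the "true or undefined" convention discussed above.
// `adapter.extensions` and the extension name are placeholders, not final API.
const adapter = await navigator.gpu.requestAdapter();

if (adapter && adapter.extensions && adapter.extensions.anisotropicFiltering) {
  // The extension is exposed by the user agent and supported by the adapter;
  // request a device with it enabled (details omitted).
} else {
  // Either unsupported or deliberately not exposed; the app cannot tell,
  // and should not need to. Fall back to a path that does not use it.
}
```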
To me, if we feel like we must say something about lying, it would be non-normative text that says something along the lines of: "For privacy reasons, user agents may lie to you about hardware capabilities" and leave it at that.
If we start introducing new values like "undefined", then we risk sites intentionally or unintentionally breaking when run in privacy modes, which is not a good user experience.
There’s “lying” and then there’s “lying.” We can make “lying” implementations intentionally diverge from the spec, or we can make the spec more flexible to allow for intentional “lying.” We should do the latter.
To clarify, do you think the spec as written is insufficiently flexible to allow lying? Or just responding to Rafael's comment?
The phrasing "supported by the [=adapter=]" does not yet exist. That is where we would say that the browser can create adapters which "support" whatever they want.
Looks like a great start! We'll have to discuss in the next meeting what level of detail we want for the spec (between GL spec style, where some assumptions are made, and very tight WebAudio-style specs).
Darn, I had a draft going that overlaps with this one. Looks good; if you don't cover the Navigator and WorkerNavigator entry points this time, I'll upload what I have for those.
Sorry for the collision. This is actually just a slight cleanup of something I wrote weeks ago in #354, so I already had it all written; I should have pushed it as a PR much sooner.
spec/index.bs
<pre class='anchors'>
urlPrefix: https://tc39.github.io/ecma262/; spec: ECMA-262
    type: dfn
        text: realm; url: realm
Could you describe what the word "realm" means within the context of WebGPU? Are you referring to different threads? Objects belonging to different adapters? Objects belonging to different WebGPU instances? Something else? The ECMA spec definition didn't clarify things for me.
Now that I'm looking at it again, I think realm is the wrong one. Agent looks like the right definition, does it clear things up?
(I'm talking about different threads (workers).)
The spec you linked describes Agents as a set of execution contexts. A thread is a means to execute jobs on the contexts. Unless I am missing something, I do not think this is quite the same as "agent == thread"
Is this "agent" different than the agent used in the term "user agent"?
If you're talking about different threads/workers, why don't we just use that term instead?
Agent is the spec language used by ECMAScript and HTML.
https://html.spec.whatwg.org/multipage/structured-data.html#transferable-objects
> Transferable objects support being transferred across agents.
An agent is not the same as a user agent. It's what we colloquially refer to as a "thread" (main thread or worker).
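As a concrete, non-WebGPU illustration of the term: the main thread and a dedicated worker are two distinct agents, and transferable objects can move between them. This is only a sketch; the worker file name is a placeholder.

```js
// Two agents in the ECMAScript/HTML sense: the main thread and a worker.
// 'worker.js' is a placeholder path for this sketch.
const worker = new Worker('worker.js');

// ArrayBuffer is a transferable object: it can be moved (not copied)
// from one agent to another.
const buffer = new ArrayBuffer(16);
worker.postMessage({ buffer }, [buffer]);

// After the transfer, this agent's view of the buffer is detached.
console.log(buffer.byteLength); // 0
```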
Thanks, @kainino0x. That makes sense.
I won't [cover the Navigator and WorkerNavigator entry points]; please do!
Finished the first pass at about 90%, notes below
 </script>

-## Base Objects ## {#base-objects}
+## Internal Objects ## {#webgpu-internal-objects}
Why are we introducing internal objects anyway? It seems confusing to me: it implies that some objects are not internal. Moreover, what is the value in saying that an interface exposes API to an internal object? I mean, how is it internal any more as we have a public interface to it...
Internal objects exist to be shared across "agents" (threads). IDL interfaces cannot, because they are JS objects.
I wouldn’t call them “internal,” then.
(Sorry for the slow response.) Can you elaborate on why not? They are not exposed to the web platform; that's why they're called internal (in line with "internal slot"). Should a different name be used?
> It seems confusing to me: it implies that some objects are not internal.

To clarify, the interface objects are not internal.

> Moreover, what is the value in saying that an interface exposes API to an internal object? I mean, how is it internal any more as we have a public interface to it...

I was imagining the tracked state as being internal to the internal object, so it would expose the API. However, I realize now that doesn't make sense: the internal object should be like a "dumb struct" whose state tracking is visible to the GPU* object which exposes it.
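A rough sketch of that structuring idea, with made-up names; this is about how the spec models things, not about any API exposed to content.

```js
// Hypothetical sketch: the internal object is a plain "dumb struct" whose
// state can be shared across agents, while the per-agent GPU* interface
// object wraps it, holds it in an internal slot, and does the state tracking.
class AdapterInternal {
  constructor(extensions) {
    this.extensions = extensions; // capabilities the adapter chooses to expose
  }
}

class DeviceInternal {
  constructor(adapter) {
    this.adapter = adapter; // the [=adapter=] this device was created from
    this.lost = false;      // shared state, visible from any wrapper
  }
}

// Stand-in for the GPUDevice interface object in one agent.
class GPUDeviceWrapper {
  constructor(deviceInternal) {
    this.device = deviceInternal; // plays the role of the [[device]] slot
  }
}
```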
spec/index.bs
Different [=adapters=] could refer to different implementations on the
same physical adapter (e.g. Vulkan and D3D12),
or to different instances of the same physical configuration
(e.g. if the GPU were disconnected and reconnected).
Why would it be a different adapter if it's just a GPU that re-connected?
I was thinking the old adapter would be permanently lost, and we would return the new one if you ask again. This isn't strictly necessary (adapters don't have to be one-way the way devices do, according to ErrorHandling.md), but it keeps consistency. Imagine the following cases:
- External GPU is unplugged and plugged back into the same port
- External GPU is unplugged and plugged back into a different port
- External GPU is unplugged and an identical one is plugged in
I see no reason that the application should want to be able to distinguish these, or should have to program against extra possibilities that behave the same way. So we either have to make these all appear as the same adapter (what happens once you have two?), or we have to make it always appear as a different adapter.
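A sketch of what that looks like from the application's point of view, assuming the one-way behavior argued for here. The exact loss-notification mechanism was still being designed at this point, so the use of `device.lost` is illustrative.

```js
// Illustrative only: once the adapter backing this device goes away,
// the app does not try to revive it; it simply requests a fresh adapter,
// which may correspond to the re-plugged GPU or an identical replacement.
let adapter = await navigator.gpu.requestAdapter();
let device = await adapter.requestDevice();

device.lost.then(async () => {
  adapter = await navigator.gpu.requestAdapter();
  if (adapter) {
    device = await adapter.requestDevice();
  }
});
```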
I think the confusion comes from the word "Different" here. I read it as: you have a list of things returned by a function, and this documentation explains why it contains different things. But what you are trying to explain in this comment is why a subsequent request for this list could return things that are different from the current request.
I agree with @kainino0x that once an adapter becomes lost, it should stay permanently lost. If the user re-plugs in an external GPU, WebGPU should create a brand new adapter object for it.
Updated the text here; PTAL
Updated.
: <dfn>\[[device]]</dfn>, of type [=device=], readonly
::
    The [=device=] that this {{GPUDevice}} refers to.
</dl>
I presume the "[=device=] that this {{GPUDevice}} refers to" is meant to be an "internal slot"?
Not sure I understand, but yes it is an internal slot:
> {{GPUDevice}} also has the following internal slots:

@kvark (and optionally @RafaelCintron), could you review the last commit I just uploaded? After that I'd like to merge this so we can iterate more easily.
Last commit looks good.
Thanks!
Does this close #132?
Commits:

* Add a component type for GPUBGLBinding compatibility (#384)

  In shaders there are several texture types for each dimensionality, depending on their component type. It can be either float, uint or sint, with maybe in the future depth/stencil if WebGPU allows reading such textures.

  The component type of a GPUTextureView's format must match the component type of its binding in the shader module. This is for several reasons:

  - Vulkan requires the following: "The Sampled Type of an OpTypeImage declaration must match the numeric format of the corresponding resource in type and signedness, as shown in the SPIR-V Sampled Type column of the Interpretation of Numeric Format table, or the values obtained by reading or sampling from this image are undefined."
  - It is also required in OpenGL for the texture units to be complete; a uint or sint texture unit used with a non-nearest sampler is incomplete and returns black texels.

  Similar constraints must exist in other APIs. To encode this compatibility constraint, a new member is added to GPUBindGroupLayoutBinding: a new enum GPUTextureComponentType that gives the component type of the texture.
* Make GPUBGLBinding.textureDimension default to 2d.

  This is the most common case and avoids having an optional dictionary member with no default value (but that still requires a value for texture bindings).
* unfinished createBindGroupLayout algorithm
* draft of BindGroupLayout details
* draft of BindGroupLayout details
* polish before PR
* fix typo
* replace u32/i32/u64 with normal int types or specific typedefs (#423)
* Do not require vertexInput in GPURenderPipelineDescriptor (#378)
* Add a default for GPURenderPassColorAttachmentDescriptor.storeOp (#376)

  Supersedes #268.
* Initial spec for GPUDevice.createBuffer (#419)
* Start writing spec for device/adapter, introduce internal objects (#422)
* Move validation rules out of algorithm body and better describe GPUBindGroupLayout internal slots
* Include limits for dynamic offset buffers
* Rename 'dynamic' boolean to 'hasDynamicOffsets'
* Fix indentation for ci bot
* More indentation errors
* Fix var typos
* Fix method definition
* Fix enum references
* Missing </dfn> tag
* Missing </dfn> tag
* Remove bad [= =]
* Fix old constant name
* Half-formed new validation rule structure for createBindGroupLayout
* An interface -> the interface
* Remove old 'layout binding' reference
* fix device lost validation reference
* Fix 'dynamic' typo
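To illustrate what the GPUTextureComponentType commit above means for API users, here is a hedged sketch using the dictionary shape described in that commit message; it assumes a `device` obtained as usual, and the field names follow that proposal rather than any later revision of the API.

```js
// Sketch based on the #384 commit message above: a sampled-texture binding
// declares its component type, and the format of the GPUTextureView bound
// to it must have a matching component type (float vs. uint vs. sint).
const bindGroupLayout = device.createBindGroupLayout({
  bindings: [{
    binding: 0,
    visibility: GPUShaderStage.FRAGMENT,
    type: 'sampled-texture',
    textureDimension: '2d',        // defaults to "2d" per the commit above
    textureComponentType: 'uint',  // e.g. compatible with an "r32uint" view,
                                   // but not with an "rgba8unorm" (float) view
  }],
});
```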