Add shader-float16 extension #1275
Conversation
@Kangz @kainino0x PTAL
Sorry, I hadn't realized that introducing an extension was not as easy as this pull request makes it look. Please ignore this pull request if it is not appropriate.
It's good to put this extension on the radar of the group, but I don't know if we'll want to take the extension name in without any specification. We'll discuss it at Monday's meeting. In the meantime you can use TypeScript's features to cast and ignore the type error.
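For reference, a minimal sketch of the cast workaround mentioned above. Note that `GPUFeatureName_` is a stand-in alias invented here so the snippet is self-contained; real code would cast to the actual `GPUFeatureName` union from the WebGPU type definitions, which does not yet include this value.

```typescript
// "shader-float16" is not yet part of the GPUFeatureName union in the
// WebGPU type definitions, so a direct assignment is a type error.
// Casting silences it until the name is specified.
// GPUFeatureName_ is a hypothetical stand-in for the real union type.
type GPUFeatureName_ = string;

const feature = "shader-float16" as GPUFeatureName_;

// It could then be passed when requesting a device, e.g.:
//   adapter.requestDevice({ requiredFeatures: [feature] });
console.log(feature);
```

The cast is purely a compile-time escape hatch; at runtime the value is just the string, so nothing changes in what is actually sent to the browser.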
I believe this is much needed. Hopefully Tint can prototype this and see if anything weird pops up.
## <dfn dfn-type=enum-value dfn-for=GPUFeatureName>shader-float16</dfn> ## {#shader-float16}
Allows 16bit float arithmetic feature in shader and 16bit storage features for GPUBuffer.
what are the "16bit storage features of GPUBuffer"?
I think it means accessing buffer memory through 16bit types in WGSL. (but it needs to be reworded to say that)
In principle this is a fine idea.
As noted, there's a lot of detail to be determined. Support in the field is very uneven, and Vulkan slices the functionality into lots of pieces, e.g.
- some devices can compute in fp16, but not load or store individual fp16 values.
- some devices can load or store individual fp16 values, but not directly do fp16 arithmetic.
- the ability to load/store fp16 values differs by storage class: storage buffer, uniform buffer, input/output, workgroup storage.

Also, you'd need to specify all the places f16 can be used, the behaviour of arithmetic, and error bounds on builtins (if they apply).
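To make that scope concrete, here is a purely illustrative sketch of what shader-side f16 use might look like once specified. The `enable` directive, the `f16` type, and the `h` literal suffix are assumptions for the sake of the example, not part of any spec at the time of this discussion.

```wgsl
// Illustrative only: syntax assumed, not normative.
enable f16;

@group(0) @binding(0) var<storage, read_write> data : array<f16>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) gid : vec3<u32>) {
  // Loading, computing, and storing in fp16 are three distinct steps,
  // and each maps to a different optional capability on Vulkan devices.
  let x : f16 = data[gid.x];
  data[gid.x] = x * 2.0h;
}
```

Even this small kernel touches two of the capability slices listed above (fp16 storage-buffer access and fp16 arithmetic), which is why the extension likely cannot be a single monolithic flag on all hardware.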
Can all the WGSL changes be included in this PR?
Discussed at the 2020-12-07 meeting.
Hi all, thank you for your comments. Unfortunately, we currently don't have enough resources to implement the fp16 extension in the Tint project.
Closing; superseded by #2696
…1417) This PR adds unimplemented stubs for the read-modify-write atomic operations: * `atomicAdd` * `atomicAnd` * `atomicCompareExchangeWeak` * `atomicExchange` * `atomicMax` * `atomicMin` * `atomicOr` * `atomicSub` * `atomicXor` Issue gpuweb#1275, gpuweb#1276, gpuweb#1277, gpuweb#1278, gpuweb#1279, gpuweb#1280, gpuweb#1281, gpuweb#1282, gpuweb#1283
@haoxli