Conversation

@xhcao commented Dec 4, 2020

@xhcao (Author) commented Dec 4, 2020

@Kangz @kainino0x PTAL

@xhcao (Author) commented Dec 4, 2020

Sorry, I had not realized it would be this hard to introduce an extension through a pull request like this. Please ignore this pull request if it is not appropriate.
The motivation: I encountered an issue when using the shader-float16 extension in tfjs. The "webgpu/types" package defines GPUExtensionName as shown below:
```typescript
export type GPUExtensionName =
  | "texture-compression-bc"
  | "timestamp-query"
  | "pipeline-statistics-query"
  | "depth-clamping";
```
There is no "shader-float16" member, so when I query whether the device supports fp16 as shown below, a compile error occurs.
```typescript
const adapter = await navigator.gpu.requestAdapter(gpuDescriptor);
let driverSupportFp16 = false;
for (let i = 0; i < adapter.extensions.length; i++) {
  if (adapter.extensions[i] === 'shader-float16') {
    driverSupportFp16 = true;
    break;
  }
}
```

@Kangz (Contributor) commented Dec 4, 2020

It's good to put this extension on the radar of the group, but I don't know if we'll want to take the extension name in without any specification. We'll discuss it at Monday's meeting. In the meantime you can use TypeScript's features to cast and ignore the type error.
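A minimal sketch of the suggested cast workaround (the union below mirrors the old "webgpu/types" definition quoted above; the runtime list is a hypothetical stand-in for `adapter.extensions`):

```typescript
// Local copy of the (old) union from @webgpu/types, for illustration.
type GPUExtensionName =
  | "texture-compression-bc"
  | "timestamp-query"
  | "pipeline-statistics-query"
  | "depth-clamping";

// Stand-in for `adapter.extensions`, which a real adapter would provide.
const extensions: ReadonlyArray<GPUExtensionName> = ["timestamp-query"];

// Widening to readonly string[] lets the comparison against
// "shader-float16" type-check even though the union lacks that member.
const driverSupportFp16 =
  (extensions as readonly string[]).includes("shader-float16");
```

Equivalently, casting the single comparison through `as any` silences the check with less ceremony, at the cost of losing all type safety on that expression.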

@kvark (Contributor) left a comment

I believe this is much needed. Hopefully Tint can prototype this and see if anything weird pops up.


## <dfn dfn-type=enum-value dfn-for=GPUFeatureName>shader-float16</dfn> ## {#shader-float16}

Allows 16bit float arithmetic feature in shader and 16bit storage features for GPUBuffer.

Contributor:

what are the "16bit storage features of GPUBuffer"?

Contributor:

I think it means accessing buffer memory through 16bit types in WGSL. (but it needs to be reworded to say that)

Contributor:

In principle this is a fine idea.

As noted, there's a lot of detail to be determined. Support in the field is very uneven, and Vulkan slices the functionality into lots of pieces, e.g.

  • some devices can compute in fp16, but not load or store individual fp16 values.

  • some devices can load or store individual fp16 values but not directly do fp16 arithmetic.

  • The ability to load/store fp16 values differs by storage class: storage buffer, uniform buffer, input/output, workgroup storage.

  • Also, you'd need to specify all the places f16 can be used, and behaviour of arithmetic, and error bounds on builtins (if they apply)

@kdashg (Contributor) left a comment

Can all the WGSL changes be included in this PR?

@grorg (Contributor) commented Dec 7, 2020

Discussed at the 2020-12-07 meeting.

@xhcao (Author) commented Dec 15, 2020

Hi all, thank you for your comments. Unfortunately, we currently do not have enough resources to implement the fp16 extension in the Tint project.
I adopted Corentin's suggestion and worked around my issue using TypeScript's cast features.

@kainino0x (Contributor) commented:
Closing; superseded by #2696

@kainino0x closed this Aug 25, 2022
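For readers landing here later: in the superseding work this capability surfaced as a feature rather than an extension, under the name "shader-f16". A hedged sketch of feature detection under the current WebGPU API shape (the helper name is illustrative; this only reports true in a WebGPU-capable browser):

```typescript
// Probe the adapter for the "shader-f16" feature. In environments
// without WebGPU (e.g. Node) this simply resolves to false.
async function supportsShaderF16(): Promise<boolean> {
  const gpu = (globalThis as any).navigator?.gpu;
  if (!gpu) return false; // no WebGPU in this environment
  const adapter = await gpu.requestAdapter();
  return adapter?.features?.has("shader-f16") ?? false;
}
```

A device would then be requested with `requiredFeatures: ["shader-f16"]`, and the WGSL side opts in with `enable f16;`.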
ben-clayton pushed a commit to ben-clayton/gpuweb that referenced this pull request Sep 6, 2022
…1417)

This PR adds unimplemented stubs for the read-modify-write atomic operations.

 * `atomicAdd`
 * `atomicAnd`
 * `atomicCompareExchangeWeak`
 * `atomicExchange`
 * `atomicMax`
 * `atomicMin`
 * `atomicOr`
 * `atomicSub`
 * `atomicXor`

Issue gpuweb#1275, gpuweb#1276, gpuweb#1277, gpuweb#1278, gpuweb#1279, gpuweb#1280, gpuweb#1281, gpuweb#1282, gpuweb#1283