[libclc] Reduce bithacking for INF/NAN values #129738


Open · wants to merge 2 commits into main

Conversation

frasercrmck
Contributor

This commit makes use of 'native' floating-point INF and NAN operations to implement certain patterns previously reliant on bithacking.

To aid in this, 'gentype' versions of INF and NAN were introduced for use by users of clc/math/gentype.inc.
@frasercrmck frasercrmck added the libclc libclc OpenCL library label Mar 4, 2025
@frasercrmck frasercrmck requested a review from arsenm March 4, 2025 16:41

github-actions bot commented Mar 4, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

```diff
@@ -86,13 +84,10 @@ _CLC_DEF _CLC_OVERLOAD __CLC_GENTYPE __clc_hypot(__CLC_GENTYPE x,
   r = c ? s : r;

   // Check for NaN
-  c = __clc_isnan(x) || __clc_isnan(y);
-  r = c ? __CLC_AS_GENTYPE((__CLC_ULONGN)QNANBITPATT_DP64) : r;
+  r = __clc_isnan(x) || __clc_isnan(y) ? __CLC_GENTYPE_NAN : r;
```
Contributor

Doesn't matter much, but can fold this condition into one __builtin_isunordered (or just let instcombine do it)

Contributor

Above should also be rewritten with frexp and ldexp

Contributor Author

> Doesn't matter much, but can fold this condition into one __builtin_isunordered (or just let instcombine do it)

Yeah I think I'll just let instcombine do it (which it does).

> Above should also be rewritten with frexp and ldexp

Sorry, I'm not seeing which bit - could you explain? I note that libclc's frexp and ldexp are more involved than any of the bithacking here (possibly due to subnormal support?) so I'm not sure what the goal would be.

```diff
-  retval = (ux == PINFBITPATT_SP32 || uy == PINFBITPATT_SP32)
-               ? __CLC_AS_GENTYPE((__CLC_UINTN)PINFBITPATT_SP32)
-               : retval;
+  retval = __clc_isinf(x) || __clc_isinf(y) ? __CLC_GENTYPE_INF : retval;
```
Contributor

Above can be rewritten with frexp and ldexp.

The output of the temporary result also implies one or the other were infinity, you can fold this to one isinf check on the maximum of fabs x and y (which is done as integer above)

Contributor

aux and auy can be rewritten as float-typed fabs(x) and fabs(y). ux and uy then apply a umax, which can be replaced by fmax. The fmax result carries the implied infiniteness, so you can check whether that one value is infinity down here.

The bithacking below that is frexp, and the scaling applied to fx and fy can be done with ldexp using the integer exponent obtained from the frexp.

Contributor Author

Right yeah, I see. Thank you for the explanation.

I'm still pondering the naming of the frexp and ldexp helpers you propose. It's not fully-fledged frexp or ldexp as far as I can tell. If it is, then our implementations of frexp and ldexp are far too complicated. Are we talking about a 'native' frexp and ldexp? A 'fast' one? That's just going on existing OpenCL terminology, which may or may not confuse things. Should they handle NaNs, Infs, subnormals?

Contributor

AMDGPU instructions names call them frexp_exp and frexp_mant. It's just the two parts of the frexp, split into 2 instructions.

ldexp is just ldexp, there's nothing special to it

Contributor Author

I can't tell from the documentation: what do frexp_exp and frexp_mant do with subnormal numbers? For exp, D0.i32 = exponent(S0.f32) - 127 + 1 implies it returns -126, which wouldn't be what frexp returns for the same input. So frexp relies on extra processing of frexp_exp and frexp_mant?

We would want consistent results across all libclc targets for a hypothetical __clc_frexp_exp and __clc_frexp_mant. We'd need to define the various edge cases and subnormal handling.

For targets without native support for either of these operations I worry that using these functions would be less optimal than doing bit manipulation/scaling/etc directly inline.

Might we not just end up in a scenario where AMDGPU in fact wants a custom implementation of __clc_hypot, rather than a generic solution for all targets involving frexp/ldexp?

Contributor

They behave correctly, not sure what documentation you are referring to. The lowering of llvm.frexp is just the two intrinsic calls

Contributor Author

I was looking at V_FREXP_EXP_I32_F32 and V_FREXP_MANT_F32 in the RDNA 3.5 docs.

I was previously looking at the lowering of llvm.frexp in CodeGen tests and saw more than just those two instructions, but now I realise I was only seeing the GFX6 output. The lowering is indeed just those two instructions for other architectures.

My broader point is that I need to align the semantics of these instructions with what other targets would have to do for the equivalent operations. Matching the AMDGPU behaviour like-for-like on other targets (separating out the two exp/mant operations, supporting subnormals, etc.) would be far more expensive than the bithacking hypot currently does to just shift out the mantissa. But if the AMDGPU version of "frexp_mant" does subnormal scaling and other targets don't, that could cause bugs between platforms, because we could no longer guarantee the bits that come out of the frexp_mant and frexp_exp helpers.

That's why I suggested it might just be better to have AMDGPU override hypot directly, rather than rely on a "generic" hypot which calls into mant/exp helpers which AMDGPU specialize.

Contributor

> But if the AMDGPU version of "frexp_mant" does subnormal scaling and other targets don't, then that could cause bugs between platforms as now we can't guarantee the bits that come out of the frexp_mant and frexp_exp helpers.

If they don't handle denormals, it's an incorrect implementation of frexp or frexp_mant. The operation is not simply masking out the exponent bits. The clamping-to-126 code above is dealing with the denormal cases. That can be folded into a software frexp_mant utility, which happens to be a single instruction on AMDGPU.

Contributor

```c
float hypot(float x, float y) {
  float a = __builtin_fabsf(x);
  float b = __builtin_fabsf(y);
  float t = __builtin_fmaxf(a, b);
  int e = __clc_frexp_exp(t);
  a = __builtin_ldexpf(a, -e);
  b = __builtin_ldexpf(b, -e);
  float ret = __builtin_ldexpf(__clc_sqrt(__clc_mad(a, a, b * b)), e);
  ret = __builtin_isinf(t) ? PINF_F32 : ret;
  return ret;
}
```

@arsenm arsenm added the floating-point Floating-point math label Mar 4, 2025