[SelectionDAG] Fix bug related to demanded bits/elts for BITCAST #139085

Open: bjope wants to merge 1 commit into main

Conversation

@bjope (Collaborator) commented May 8, 2025

When we have a BITCAST and the source type is a vector with smaller elements than the destination type, we need to demand all the source elements that make up the demanded elts of the result when doing recursive calls to SimplifyDemandedBits, SimplifyDemandedVectorElts and SimplifyMultipleUseDemandedBits. The problem is that those simplifications are allowed to turn non-demanded elements of a vector into POISON, so unless we demand all source elements that make up the result, there is a risk that the result becomes more poisonous (even for demanded elts) after the simplification.

The patch fixes some bugs in SimplifyMultipleUseDemandedBits and SimplifyDemandedBits for situations where we did not consider the problem described above. Now we make sure that we also demand vector elements that "must not be turned into poison", even if those elements correspond to bits that do not need to be defined according to the DemandedBits mask.

Fixes #138513
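
As an illustration of the element scaling the patch switches to (a minimal standalone sketch, not part of the patch: it assumes the LLVM ADT headers are available, and the v4i16 -> v2i32 bitcast and the demanded masks are made up for this example):

#include "llvm/ADT/APInt.h"
#include <cassert>

using namespace llvm;

int main() {
  // Hypothetical BITCAST from v4i16 (source) to v2i32 (destination) on a
  // little-endian target. Only destination element 0 is demanded, and of
  // that element only the low 16 bits.
  unsigned NumElts = 2;    // destination vector elements
  unsigned NumSrcElts = 4; // source vector elements
  APInt DemandedElts(NumElts, 0b01);
  APInt DemandedBits(32, 0xFFFF);

  // Old behavior: only source element 0 (the sub-element carrying the
  // demanded low bits) was demanded, so a recursive simplification could
  // turn source element 1 into poison. Since destination element 0 is
  // built from source elements 0 and 1, the demanded result element could
  // become more poisonous.

  // New behavior: demand every source element that feeds a demanded
  // destination element, regardless of the DemandedBits mask.
  APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
  assert(DemandedSrcElts == APInt(NumSrcElts, 0b0011)); // src elts 0 and 1

  (void)DemandedBits;
  (void)DemandedSrcElts;
  return 0;
}

APIntOps::ScaleBitMask is the same helper the patch uses in both fixed hunks below.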

@llvmbot (Member) commented May 8, 2025

@llvm/pr-subscribers-llvm-selectiondag
@llvm/pr-subscribers-backend-x86
@llvm/pr-subscribers-backend-amdgpu

Author: Björn Pettersson (bjope)

Changes

Patch is 1.50 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/139085.diff

120 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (+10-12)
  • (modified) llvm/test/CodeGen/AArch64/reduce-or.ll (+5-6)
  • (modified) llvm/test/CodeGen/AArch64/reduce-xor.ll (+5-6)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll (+7-5)
  • (modified) llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll (+51-47)
  • (modified) llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll (+4-3)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i1.ll (+324-278)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i16.ll (+60-30)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i8.ll (+487-409)
  • (modified) llvm/test/CodeGen/AMDGPU/load-range-metadata-sign-bits.ll (+3-2)
  • (modified) llvm/test/CodeGen/AMDGPU/mad_64_32.ll (+7-6)
  • (modified) llvm/test/CodeGen/AMDGPU/mul_int24.ll (+32-28)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv64.ll (+40-40)
  • (modified) llvm/test/CodeGen/AMDGPU/shift-i128.ll (+9-7)
  • (modified) llvm/test/CodeGen/AMDGPU/srem64.ll (+40-40)
  • (modified) llvm/test/CodeGen/ARM/fpclamptosat_vec.ll (+234-210)
  • (modified) llvm/test/CodeGen/Thumb2/mve-fpclamptosat_vec.ll (+42-38)
  • (modified) llvm/test/CodeGen/Thumb2/mve-gather-ind8-unscaled.ll (-5)
  • (modified) llvm/test/CodeGen/Thumb2/mve-laneinterleaving.ll (+45-41)
  • (modified) llvm/test/CodeGen/Thumb2/mve-pred-ext.ll (+1)
  • (modified) llvm/test/CodeGen/Thumb2/mve-satmul-loops.ll (+118-95)
  • (modified) llvm/test/CodeGen/Thumb2/mve-scatter-ind8-unscaled.ll (+2-7)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vecreduce-addpred.ll (+4-4)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vecreduce-mlapred.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/avx512-intrinsics-upgrade.ll (+32-14)
  • (modified) llvm/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll (+38-14)
  • (modified) llvm/test/CodeGen/X86/avx512vl-vec-masked-cmp.ll (+140-110)
  • (modified) llvm/test/CodeGen/X86/bitcast-and-setcc-128.ll (+16-12)
  • (modified) llvm/test/CodeGen/X86/bitcast-setcc-128.ll (+7-5)
  • (modified) llvm/test/CodeGen/X86/bitcast-setcc-512.ll (+8-3)
  • (modified) llvm/test/CodeGen/X86/bitcast-vector-bool.ll (+12-3)
  • (modified) llvm/test/CodeGen/X86/buildvec-widen-dotproduct.ll (+37-34)
  • (modified) llvm/test/CodeGen/X86/combine-pmuldq.ll (+69-18)
  • (modified) llvm/test/CodeGen/X86/combine-sdiv.ll (+26-25)
  • (modified) llvm/test/CodeGen/X86/combine-sra.ll (+32-31)
  • (modified) llvm/test/CodeGen/X86/combine-udiv.ll (+4-1)
  • (modified) llvm/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll (+4)
  • (modified) llvm/test/CodeGen/X86/fminimum-fmaximum.ll (+12-12)
  • (modified) llvm/test/CodeGen/X86/fminimumnum-fmaximumnum.ll (+12-12)
  • (modified) llvm/test/CodeGen/X86/fold-int-pow2-with-fmul-or-fdiv.ll (+44-45)
  • (modified) llvm/test/CodeGen/X86/gfni-funnel-shifts.ll (+150-150)
  • (modified) llvm/test/CodeGen/X86/hoist-and-by-const-from-shl-in-eqcmp-zero.ll (+20-28)
  • (modified) llvm/test/CodeGen/X86/known-never-zero.ll (+2-1)
  • (modified) llvm/test/CodeGen/X86/known-pow2.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/known-signbits-shl.ll (+2-1)
  • (modified) llvm/test/CodeGen/X86/known-signbits-vector.ll (+23-7)
  • (modified) llvm/test/CodeGen/X86/masked_store.ll (+13-5)
  • (modified) llvm/test/CodeGen/X86/movmsk-cmp.ll (+84-25)
  • (modified) llvm/test/CodeGen/X86/mulvi32.ll (+8-7)
  • (modified) llvm/test/CodeGen/X86/omit-urem-of-power-of-two-or-zero-when-comparing-with-zero.ll (+14-11)
  • (modified) llvm/test/CodeGen/X86/pmul.ll (+74-66)
  • (modified) llvm/test/CodeGen/X86/pmulh.ll (+4-2)
  • (modified) llvm/test/CodeGen/X86/pr107423.ll (+13-13)
  • (modified) llvm/test/CodeGen/X86/pr35918.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/pr41619.ll (+2)
  • (modified) llvm/test/CodeGen/X86/pr42727.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/pr45563-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/pr45833.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/pr77459.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/promote-cmp.ll (+22-19)
  • (modified) llvm/test/CodeGen/X86/promote-vec3.ll (+6-3)
  • (modified) llvm/test/CodeGen/X86/psubus.ll (+132-123)
  • (modified) llvm/test/CodeGen/X86/rotate-extract-vector.ll (+14-26)
  • (modified) llvm/test/CodeGen/X86/sadd_sat_vec.ll (+114-93)
  • (modified) llvm/test/CodeGen/X86/sat-add.ll (+12-9)
  • (modified) llvm/test/CodeGen/X86/sdiv-exact.ll (+17-13)
  • (modified) llvm/test/CodeGen/X86/sdiv_fix_sat.ll (+131-130)
  • (modified) llvm/test/CodeGen/X86/shrink_vmul.ll (+2)
  • (modified) llvm/test/CodeGen/X86/srem-seteq-vec-nonsplat.ll (+11-9)
  • (modified) llvm/test/CodeGen/X86/sshl_sat_vec.ll (+66-68)
  • (modified) llvm/test/CodeGen/X86/ssub_sat_vec.ll (+151-123)
  • (modified) llvm/test/CodeGen/X86/test-shrink-bug.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/ucmp.ll (+221-214)
  • (modified) llvm/test/CodeGen/X86/udiv-exact.ll (+17-13)
  • (modified) llvm/test/CodeGen/X86/urem-seteq-illegal-types.ll (+3-5)
  • (modified) llvm/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll (+172-122)
  • (modified) llvm/test/CodeGen/X86/ushl_sat_vec.ll (+15-14)
  • (modified) llvm/test/CodeGen/X86/vec-strict-inttofp-256.ll (+43-37)
  • (modified) llvm/test/CodeGen/X86/vec_int_to_fp.ll (+159-126)
  • (modified) llvm/test/CodeGen/X86/vec_minmax_sint.ll (+92-66)
  • (modified) llvm/test/CodeGen/X86/vec_minmax_uint.ll (+92-66)
  • (modified) llvm/test/CodeGen/X86/vec_smulo.ll (+143-139)
  • (modified) llvm/test/CodeGen/X86/vec_umulo.ll (+99-89)
  • (modified) llvm/test/CodeGen/X86/vector-compare-all_of.ll (+21-15)
  • (modified) llvm/test/CodeGen/X86/vector-compare-any_of.ll (+21-15)
  • (modified) llvm/test/CodeGen/X86/vector-constrained-fp-intrinsics.ll (+23-19)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-128.ll (+159-139)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-256.ll (+48-47)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-rot-128.ll (+56-37)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-rot-256.ll (+15-14)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-128.ll (+33-27)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-256.ll (+20-19)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-rot-128.ll (+33-28)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-rot-256.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/vector-interleaved-store-i32-stride-5.ll (+9-9)
  • (modified) llvm/test/CodeGen/X86/vector-interleaved-store-i32-stride-7.ll (+3214-3353)
  • (modified) llvm/test/CodeGen/X86/vector-interleaved-store-i8-stride-8.ll (+222-214)
  • (modified) llvm/test/CodeGen/X86/vector-mul.ll (+34-34)
  • (modified) llvm/test/CodeGen/X86/vector-pcmp.ll (+3-2)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-fmaximum.ll (+82-81)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-mul.ll (+112-56)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-smax.ll (+99-69)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-smin.ll (+96-65)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-umax.ll (+99-69)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-umin.ll (+96-65)
  • (modified) llvm/test/CodeGen/X86/vector-rotate-128.ll (+56-37)
  • (modified) llvm/test/CodeGen/X86/vector-rotate-256.ll (+15-14)
  • (modified) llvm/test/CodeGen/X86/vector-shift-shl-128.ll (+28-24)
  • (modified) llvm/test/CodeGen/X86/vector-shift-shl-256.ll (+22-20)
  • (modified) llvm/test/CodeGen/X86/vector-shift-shl-sub128.ll (+56-48)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-256-v16.ll (+16-16)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-combining-avx.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-combining-ssse3.ll (+3-6)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-combining.ll (+6-3)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-packus.ll (+945-806)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-ssat.ll (+734-609)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-usat.ll (+398-368)
  • (modified) llvm/test/CodeGen/X86/vector_splat-const-shift-of-constmasked.ll (+4-5)
  • (modified) llvm/test/CodeGen/X86/vselect.ll (+10-20)
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index ba34c72156228..d5697b6031537 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -711,18 +711,17 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedBits(
       unsigned Scale = NumDstEltBits / NumSrcEltBits;
       unsigned NumSrcElts = SrcVT.getVectorNumElements();
       APInt DemandedSrcBits = APInt::getZero(NumSrcEltBits);
-      APInt DemandedSrcElts = APInt::getZero(NumSrcElts);
       for (unsigned i = 0; i != Scale; ++i) {
         unsigned EltOffset = IsLE ? i : (Scale - 1 - i);
         unsigned BitOffset = EltOffset * NumSrcEltBits;
         APInt Sub = DemandedBits.extractBits(NumSrcEltBits, BitOffset);
-        if (!Sub.isZero()) {
+        if (!Sub.isZero())
           DemandedSrcBits |= Sub;
-          for (unsigned j = 0; j != NumElts; ++j)
-            if (DemandedElts[j])
-              DemandedSrcElts.setBit((j * Scale) + i);
-        }
       }
+      // Need to demand all smaller source elements that map to a demanded
+      // destination element, since recursive calls below may turn non-demanded
+      // elements into poison.
+      APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
 
       if (SDValue V = SimplifyMultipleUseDemandedBits(
               Src, DemandedSrcBits, DemandedSrcElts, DAG, Depth + 1))
@@ -2755,18 +2754,17 @@ bool TargetLowering::SimplifyDemandedBits(
       unsigned Scale = BitWidth / NumSrcEltBits;
       unsigned NumSrcElts = SrcVT.getVectorNumElements();
       APInt DemandedSrcBits = APInt::getZero(NumSrcEltBits);
-      APInt DemandedSrcElts = APInt::getZero(NumSrcElts);
       for (unsigned i = 0; i != Scale; ++i) {
         unsigned EltOffset = IsLE ? i : (Scale - 1 - i);
         unsigned BitOffset = EltOffset * NumSrcEltBits;
         APInt Sub = DemandedBits.extractBits(NumSrcEltBits, BitOffset);
-        if (!Sub.isZero()) {
+        if (!Sub.isZero())
           DemandedSrcBits |= Sub;
-          for (unsigned j = 0; j != NumElts; ++j)
-            if (DemandedElts[j])
-              DemandedSrcElts.setBit((j * Scale) + i);
-        }
       }
+      // Need to demand all smaller source elements that map to a demanded
+      // destination element, since recursive calls below may turn non-demanded
+      // elements into poison.
+      APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
 
       APInt KnownSrcUndef, KnownSrcZero;
       if (SimplifyDemandedVectorElts(Src, DemandedSrcElts, KnownSrcUndef,
diff --git a/llvm/test/CodeGen/AArch64/reduce-or.ll b/llvm/test/CodeGen/AArch64/reduce-or.ll
index aac31ce8b71b7..f5291f5debb40 100644
--- a/llvm/test/CodeGen/AArch64/reduce-or.ll
+++ b/llvm/test/CodeGen/AArch64/reduce-or.ll
@@ -218,13 +218,12 @@ define i8 @test_redor_v3i8(<3 x i8> %a) {
 ; CHECK-NEXT:    movi v0.2d, #0000000000000000
 ; CHECK-NEXT:    mov v0.h[0], w0
 ; CHECK-NEXT:    mov v0.h[1], w1
-; CHECK-NEXT:    fmov x8, d0
 ; CHECK-NEXT:    mov v0.h[2], w2
-; CHECK-NEXT:    fmov x9, d0
-; CHECK-NEXT:    lsr x10, x9, #32
-; CHECK-NEXT:    lsr x9, x9, #16
-; CHECK-NEXT:    orr w8, w8, w10
-; CHECK-NEXT:    orr w0, w8, w9
+; CHECK-NEXT:    fmov x8, d0
+; CHECK-NEXT:    lsr x9, x8, #32
+; CHECK-NEXT:    lsr x10, x8, #16
+; CHECK-NEXT:    orr w8, w8, w9
+; CHECK-NEXT:    orr w0, w8, w10
 ; CHECK-NEXT:    ret
 ;
 ; GISEL-LABEL: test_redor_v3i8:
diff --git a/llvm/test/CodeGen/AArch64/reduce-xor.ll b/llvm/test/CodeGen/AArch64/reduce-xor.ll
index 9a00172f94763..df8485b91468f 100644
--- a/llvm/test/CodeGen/AArch64/reduce-xor.ll
+++ b/llvm/test/CodeGen/AArch64/reduce-xor.ll
@@ -207,13 +207,12 @@ define i8 @test_redxor_v3i8(<3 x i8> %a) {
 ; CHECK-NEXT:    movi v0.2d, #0000000000000000
 ; CHECK-NEXT:    mov v0.h[0], w0
 ; CHECK-NEXT:    mov v0.h[1], w1
-; CHECK-NEXT:    fmov x8, d0
 ; CHECK-NEXT:    mov v0.h[2], w2
-; CHECK-NEXT:    fmov x9, d0
-; CHECK-NEXT:    lsr x10, x9, #32
-; CHECK-NEXT:    lsr x9, x9, #16
-; CHECK-NEXT:    eor w8, w8, w10
-; CHECK-NEXT:    eor w0, w8, w9
+; CHECK-NEXT:    fmov x8, d0
+; CHECK-NEXT:    lsr x9, x8, #32
+; CHECK-NEXT:    lsr x10, x8, #16
+; CHECK-NEXT:    eor w8, w8, w9
+; CHECK-NEXT:    eor w0, w8, w10
 ; CHECK-NEXT:    ret
 ;
 ; GISEL-LABEL: test_redxor_v3i8:
diff --git a/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll b/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
index 7fa416e0dbcd5..e21ae88d52b47 100644
--- a/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
+++ b/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
@@ -125,12 +125,14 @@ define i8 @test_v9i8(<9 x i8> %a) nounwind {
 define i32 @test_v3i32(<3 x i32> %a) nounwind {
 ; CHECK-LABEL: test_v3i32:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT:    mov v1.16b, v0.16b
+; CHECK-NEXT:    mov w8, #-1 // =0xffffffff
+; CHECK-NEXT:    mov v1.s[3], w8
+; CHECK-NEXT:    ext v1.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT:    and v0.8b, v0.8b, v1.8b
 ; CHECK-NEXT:    fmov x8, d0
-; CHECK-NEXT:    lsr x8, x8, #32
-; CHECK-NEXT:    and v1.8b, v0.8b, v1.8b
-; CHECK-NEXT:    fmov x9, d1
-; CHECK-NEXT:    and w0, w9, w8
+; CHECK-NEXT:    lsr x9, x8, #32
+; CHECK-NEXT:    and w0, w8, w9
 ; CHECK-NEXT:    ret
   %b = call i32 @llvm.vector.reduce.and.v3i32(<3 x i32> %a)
   ret i32 %b
diff --git a/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll b/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
index d1090738e24a6..07fc5f30f23a3 100644
--- a/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
+++ b/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
@@ -1904,69 +1904,74 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
 ; VI-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
 ; VI-NEXT:    v_add_u32_e32 v2, vcc, 5, v0
 ; VI-NEXT:    v_addc_u32_e32 v3, vcc, 0, v1, vcc
-; VI-NEXT:    flat_load_ubyte v10, v[2:3]
-; VI-NEXT:    v_add_u32_e32 v2, vcc, 6, v0
-; VI-NEXT:    v_addc_u32_e32 v3, vcc, 0, v1, vcc
-; VI-NEXT:    v_add_u32_e32 v4, vcc, 1, v0
+; VI-NEXT:    v_add_u32_e32 v4, vcc, 4, v0
 ; VI-NEXT:    v_addc_u32_e32 v5, vcc, 0, v1, vcc
-; VI-NEXT:    v_add_u32_e32 v6, vcc, 2, v0
+; VI-NEXT:    v_add_u32_e32 v6, vcc, 1, v0
 ; VI-NEXT:    v_addc_u32_e32 v7, vcc, 0, v1, vcc
-; VI-NEXT:    v_add_u32_e32 v8, vcc, 3, v0
+; VI-NEXT:    v_add_u32_e32 v8, vcc, 2, v0
 ; VI-NEXT:    v_addc_u32_e32 v9, vcc, 0, v1, vcc
-; VI-NEXT:    flat_load_ubyte v6, v[6:7]
-; VI-NEXT:    flat_load_ubyte v7, v[8:9]
-; VI-NEXT:    flat_load_ubyte v8, v[2:3]
-; VI-NEXT:    flat_load_ubyte v2, v[0:1]
+; VI-NEXT:    v_add_u32_e32 v10, vcc, 3, v0
+; VI-NEXT:    v_addc_u32_e32 v11, vcc, 0, v1, vcc
+; VI-NEXT:    v_add_u32_e32 v12, vcc, 6, v0
+; VI-NEXT:    v_addc_u32_e32 v13, vcc, 0, v1, vcc
+; VI-NEXT:    flat_load_ubyte v2, v[2:3]
 ; VI-NEXT:    flat_load_ubyte v4, v[4:5]
-; VI-NEXT:    v_add_u32_e32 v0, vcc, 4, v0
-; VI-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
-; VI-NEXT:    flat_load_ubyte v9, v[0:1]
+; VI-NEXT:    flat_load_ubyte v5, v[6:7]
+; VI-NEXT:    flat_load_ubyte v7, v[8:9]
+; VI-NEXT:    flat_load_ubyte v3, v[10:11]
+; VI-NEXT:    flat_load_ubyte v6, v[12:13]
+; VI-NEXT:    flat_load_ubyte v0, v[0:1]
+; VI-NEXT:    v_mov_b32_e32 v8, 0x3020504
 ; VI-NEXT:    s_mov_b32 s3, 0xf000
 ; VI-NEXT:    s_mov_b32 s2, -1
 ; VI-NEXT:    s_waitcnt vmcnt(6)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v5, v10
+; VI-NEXT:    v_lshlrev_b32_e32 v9, 8, v2
+; VI-NEXT:    s_waitcnt vmcnt(5)
+; VI-NEXT:    v_or_b32_e32 v4, v9, v4
 ; VI-NEXT:    s_waitcnt vmcnt(4)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v3, v7
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v1, v5
+; VI-NEXT:    s_waitcnt vmcnt(3)
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v2, v7
 ; VI-NEXT:    s_waitcnt vmcnt(2)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v0, v2
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v2, v6
-; VI-NEXT:    s_waitcnt vmcnt(1)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v1, v4
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v6, v8
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v3, v3
+; VI-NEXT:    v_perm_b32 v4, v4, s0, v8
 ; VI-NEXT:    s_waitcnt vmcnt(0)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v4, v9
-; VI-NEXT:    buffer_store_dwordx3 v[4:6], off, s[0:3], 0 offset:16
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v0, v0
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v6, v6
+; VI-NEXT:    v_cvt_f32_ubyte1_e32 v5, v4
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v4, v4
 ; VI-NEXT:    buffer_store_dwordx4 v[0:3], off, s[0:3], 0
+; VI-NEXT:    buffer_store_dwordx3 v[4:6], off, s[0:3], 0 offset:16
 ; VI-NEXT:    s_endpgm
 ;
 ; GFX10-LABEL: load_v7i8_to_v7f32:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0x24
 ; GFX10-NEXT:    v_lshlrev_b32_e32 v0, 3, v0
-; GFX10-NEXT:    v_mov_b32_e32 v8, 0
+; GFX10-NEXT:    v_mov_b32_e32 v4, 0
+; GFX10-NEXT:    v_mov_b32_e32 v7, 0
 ; GFX10-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX10-NEXT:    s_clause 0x5
-; GFX10-NEXT:    global_load_ubyte v4, v0, s[2:3] offset:6
+; GFX10-NEXT:    global_load_ubyte v5, v0, s[2:3] offset:6
 ; GFX10-NEXT:    global_load_ubyte v1, v0, s[2:3] offset:3
 ; GFX10-NEXT:    global_load_ubyte v2, v0, s[2:3] offset:2
-; GFX10-NEXT:    global_load_ubyte v5, v0, s[2:3] offset:1
-; GFX10-NEXT:    global_load_short_d16 v7, v0, s[2:3] offset:4
+; GFX10-NEXT:    global_load_ubyte v6, v0, s[2:3] offset:1
+; GFX10-NEXT:    global_load_short_d16 v4, v0, s[2:3] offset:4
 ; GFX10-NEXT:    global_load_ubyte v0, v0, s[2:3]
-; GFX10-NEXT:    s_waitcnt vmcnt(5)
-; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v6, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(4)
 ; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v3, v1
 ; GFX10-NEXT:    s_waitcnt vmcnt(3)
 ; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v2, v2
 ; GFX10-NEXT:    s_waitcnt vmcnt(2)
-; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v1, v5
+; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v1, v6
+; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v6, v5
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
-; GFX10-NEXT:    v_cvt_f32_ubyte1_e32 v5, v7
-; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v4, v7
+; GFX10-NEXT:    v_cvt_f32_ubyte1_e32 v5, v4
+; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v4, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
 ; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v0, v0
-; GFX10-NEXT:    global_store_dwordx3 v8, v[4:6], s[0:1] offset:16
-; GFX10-NEXT:    global_store_dwordx4 v8, v[0:3], s[0:1]
+; GFX10-NEXT:    global_store_dwordx3 v7, v[4:6], s[0:1] offset:16
+; GFX10-NEXT:    global_store_dwordx4 v7, v[0:3], s[0:1]
 ; GFX10-NEXT:    s_endpgm
 ;
 ; GFX9-LABEL: load_v7i8_to_v7f32:
@@ -1984,8 +1989,8 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
 ; GFX9-NEXT:    s_waitcnt vmcnt(5)
 ; GFX9-NEXT:    v_cvt_f32_ubyte0_e32 v6, v1
 ; GFX9-NEXT:    s_waitcnt vmcnt(4)
-; GFX9-NEXT:    v_cvt_f32_ubyte1_e32 v5, v2
-; GFX9-NEXT:    v_cvt_f32_ubyte0_e32 v4, v2
+; GFX9-NEXT:    v_cvt_f32_ubyte1_sdwa v5, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
+; GFX9-NEXT:    v_cvt_f32_ubyte0_sdwa v4, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
 ; GFX9-NEXT:    s_waitcnt vmcnt(3)
 ; GFX9-NEXT:    v_cvt_f32_ubyte0_e32 v3, v3
 ; GFX9-NEXT:    s_waitcnt vmcnt(2)
@@ -2001,34 +2006,33 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
 ; GFX11-LABEL: load_v7i8_to_v7f32:
 ; GFX11:       ; %bb.0:
 ; GFX11-NEXT:    s_load_b128 s[0:3], s[4:5], 0x24
-; GFX11-NEXT:    v_and_b32_e32 v0, 0x3ff, v0
-; GFX11-NEXT:    v_mov_b32_e32 v8, 0
+; GFX11-NEXT:    v_dual_mov_b32 v7, 0 :: v_dual_and_b32 v0, 0x3ff, v0
+; GFX11-NEXT:    v_mov_b32_e32 v4, 0
 ; GFX11-NEXT:    s_delay_alu instid0(VALU_DEP_2)
 ; GFX11-NEXT:    v_lshlrev_b32_e32 v0, 3, v0
 ; GFX11-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX11-NEXT:    s_clause 0x5
-; GFX11-NEXT:    global_load_u8 v4, v0, s[2:3] offset:6
+; GFX11-NEXT:    global_load_u8 v5, v0, s[2:3] offset:6
 ; GFX11-NEXT:    global_load_u8 v1, v0, s[2:3] offset:3
 ; GFX11-NEXT:    global_load_u8 v2, v0, s[2:3] offset:2
-; GFX11-NEXT:    global_load_u8 v5, v0, s[2:3] offset:1
-; GFX11-NEXT:    global_load_d16_b16 v7, v0, s[2:3] offset:4
+; GFX11-NEXT:    global_load_u8 v6, v0, s[2:3] offset:1
+; GFX11-NEXT:    global_load_d16_b16 v4, v0, s[2:3] offset:4
 ; GFX11-NEXT:    global_load_u8 v0, v0, s[2:3]
-; GFX11-NEXT:    s_waitcnt vmcnt(5)
-; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v6, v4
 ; GFX11-NEXT:    s_waitcnt vmcnt(4)
 ; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v3, v1
 ; GFX11-NEXT:    s_waitcnt vmcnt(3)
 ; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v2, v2
 ; GFX11-NEXT:    s_waitcnt vmcnt(2)
-; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v1, v5
+; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v1, v6
+; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v6, v5
 ; GFX11-NEXT:    s_waitcnt vmcnt(1)
-; GFX11-NEXT:    v_cvt_f32_ubyte1_e32 v5, v7
-; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v4, v7
+; GFX11-NEXT:    v_cvt_f32_ubyte1_e32 v5, v4
+; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v4, v4
 ; GFX11-NEXT:    s_waitcnt vmcnt(0)
 ; GFX11-NEXT:    v_cvt_f32_ubyte0_e32 v0, v0
 ; GFX11-NEXT:    s_clause 0x1
-; GFX11-NEXT:    global_store_b96 v8, v[4:6], s[0:1] offset:16
-; GFX11-NEXT:    global_store_b128 v8, v[0:3], s[0:1]
+; GFX11-NEXT:    global_store_b96 v7, v[4:6], s[0:1] offset:16
+; GFX11-NEXT:    global_store_b128 v7, v[0:3], s[0:1]
 ; GFX11-NEXT:    s_endpgm
   %tid = call i32 @llvm.amdgcn.workitem.id.x()
   %gep = getelementptr <7 x i8>, ptr addrspace(1) %in, i32 %tid
diff --git a/llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll b/llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll
index 933c6506d0270..c29039b86e82b 100644
--- a/llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll
+++ b/llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll
@@ -150,9 +150,10 @@ define i32 @mul_one_bit_hi_hi_u32_lshr_ashr(i32 %arg, i32 %arg1, ptr %arg2) {
 ; CHECK-LABEL: mul_one_bit_hi_hi_u32_lshr_ashr:
 ; CHECK:       ; %bb.0: ; %bb
 ; CHECK-NEXT:    s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
-; CHECK-NEXT:    v_mul_hi_u32 v4, v1, v0
-; CHECK-NEXT:    v_ashrrev_i64 v[0:1], 33, v[3:4]
-; CHECK-NEXT:    flat_store_dword v[2:3], v4
+; CHECK-NEXT:    v_mad_u64_u32 v[4:5], s[4:5], v1, v0, 0
+; CHECK-NEXT:    v_mul_hi_u32 v6, v1, v0
+; CHECK-NEXT:    v_ashrrev_i64 v[0:1], 33, v[4:5]
+; CHECK-NEXT:    flat_store_dword v[2:3], v6
 ; CHECK-NEXT:    s_waitcnt vmcnt(0) lgkmcnt(0)
 ; CHECK-NEXT:    s_setpc_b64 s[30:31]
 bb:
diff --git a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
index a9240eff8e691..6b7f648f65a45 100644
--- a/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
+++ b/llvm/test/CodeGen/AMDGPU/load-constant-i1.ll
@@ -8340,191 +8340,216 @@ define amdgpu_kernel void @constant_sextload_v64i1_to_v64i64(ptr addrspace(1) %o
 ; GFX6-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0x9
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX6-NEXT:    s_load_dwordx2 s[4:5], s[2:3], 0x0
+; GFX6-NEXT:    s_mov_b32 s7, 0
 ; GFX6-NEXT:    s_mov_b32 s3, 0xf000
-; GFX6-NEXT:    s_mov_b32 s2, -1
+; GFX6-NEXT:    s_mov_b32 s9, s7
+; GFX6-NEXT:    s_mov_b32 s11, s7
+; GFX6-NEXT:    s_mov_b32 s13, s7
+; GFX6-NEXT:    s_mov_b32 s17, s7
+; GFX6-NEXT:    s_mov_b32 s19, s7
 ; GFX6-NEXT:    s_waitcnt lgkmcnt(0)
-; GFX6-NEXT:    s_lshr_b32 s42, s5, 30
-; GFX6-NEXT:    s_lshr_b32 s36, s5, 28
-; GFX6-NEXT:    s_lshr_b32 s38, s5, 29
-; GFX6-NEXT:    s_lshr_b32 s30, s5, 26
-; GFX6-NEXT:    s_lshr_b32 s34, s5, 27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 24
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 25
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 22
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 23
-; GFX6-NEXT:    s_lshr_b32 s18, s5, 20
-; GFX6-NEXT:    s_lshr_b32 s20, s5, 21
-; GFX6-NEXT:    s_lshr_b32 s14, s5, 18
-; GFX6-NEXT:    s_lshr_b32 s16, s5, 19
-; GFX6-NEXT:    s_lshr_b32 s10, s5, 16
-; GFX6-NEXT:    s_lshr_b32 s12, s5, 17
-; GFX6-NEXT:    s_lshr_b32 s6, s5, 14
-; GFX6-NEXT:    s_lshr_b32 s8, s5, 15
-; GFX6-NEXT:    s_mov_b32 s40, s5
-; GFX6-NEXT:    s_ashr_i32 s7, s5, 31
-; GFX6-NEXT:    s_bfe_i64 s[44:45], s[40:41], 0x10000
-; GFX6-NEXT:    v_mov_b32_e32 v4, s7
-; GFX6-NEXT:    s_lshr_b32 s40, s5, 12
+; GFX6-NEXT:    s_lshr_b32 s6, s5, 30
+; GFX6-NEXT:    s_lshr_b32 s8, s5, 28
+; GFX6-NEXT:    s_lshr_b32 s10, s5, 29
+; GFX6-NEXT:    s_lshr_b32 s12, s5, 26
+; GFX6-NEXT:    s_lshr_b32 s16, s5, 27
+; GFX6-NEXT:    s_mov_b32 s18, s5
+; GFX6-NEXT:    s_bfe_i64 s[14:15], s[4:5], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[44:45], s[18:19], 0x10000
+; GFX6-NEXT:    s_ashr_i32 s18, s5, 31
+; GFX6-NEXT:    s_bfe_i64 s[28:29], s[16:17], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[36:37], s[12:13], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[38:39], s[10:11], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[40:41], s[8:9], 0x10000
+; GFX6-NEXT:    s_bfe_i64 s[42:43], s[6:7], 0x10000
+; GFX6-NEXT:    s_mov_b32 s2, -1
+; GFX6-NEXT:    s_mov_b32 s31, s7
+; GFX6-NEXT:    s_mov_b32 s35, s7
+; GFX6-NEXT:    s_mov_b32 s25, s7
+; GFX6-NEXT:    s_mov_b32 s27, s7
+; GFX6-NEXT:    s_mov_b32 s21, s7
+; GFX6-NEXT:    s_mov_b32 s23, s7
+; GFX6-NEXT:    v_mov_b32_e32 v4, s18
 ; GFX6-NEXT:    v_mov_b32_e32 v0, s44
 ; GFX6-NEXT:    v_mov_b32_e32 v1, s45
-; GFX6-NEXT:    s_bfe_i64 s[44:45], s[4:5], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[42:43], s[42:43], 0x10000
-; GFX6-NEXT:    v_mov_b32_e32 v6, s44
-; GFX6-NEXT:    v_mov_b32_e32 v7, s45
-; GFX6-NEXT:    s_lshr_b32 s44, s5, 13
+; GFX6-NEXT:    s_mov_b32 s45, s7
+; GFX6-NEXT:    v_mov_b32_e32 v6, s14
+; GFX6-NEXT:    v_mov_b32_e32 v7, s15
+; GFX6-NEXT:    s_mov_b32 s47, s7
 ; GFX6-NEXT:    v_mov_b32_e32 v2, s42
 ; GFX6-NEXT:    v_mov_b32_e32 v3, s43
-; GFX6-NEXT:    s_lshr_b32 s42, s5, 10
-; GFX6-NEXT:    s_bfe_i64 s[36:37], s[36:37], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[38:39], s[38:39], 0x10000
-; GFX6-NEXT:    v_mov_b32_e32 v8, s36
-; GFX6-NEXT:    v_mov_b32_e32 v9, s37
-; GFX6-NEXT:    s_lshr_b32 s36, s5, 11
+; GFX6-NEXT:    s_mov_b32 s43, s7
+; GFX6-NEXT:    v_mov_b32_e32 v8, s40
+; GFX6-NEXT:    v_mov_b32_e32 v9, s41
+; GFX6-NEXT:    s_mov_b32 s41, s7
 ; GFX6-NEXT:    v_mov_b32_e32 v10, s38
 ; GFX6-NEXT:    v_mov_b32_e32 v11, s39
-; GFX6-NEXT:    s_lshr_b32 s38, s5, 8
-; GFX6-NEXT:    s_bfe_i64 s[30:31], s[30:31], 0x10000
+; GFX6-NEXT:    s_mov_b32 s39, s7
+; GFX6-NEXT:    v_mov_b32_e32 v12, s36
+; GFX6-NEXT:    v_mov_b32_e32 v13, s37
+; GFX6-NEXT:    s_mov_b32 s15, s7
+; GFX6-NEXT:    v_mov_b32_e32 v14, s28
+; GFX6-NEXT:    v_mov_b32_e32 v15, s29
+; GFX6-NEXT:    s_mov_b32 s37, s7
+; GFX6-NEXT:    s_lshr_b32 s30, s5, 24
+; GFX6-NEXT:    s_lshr_b32 s34, s5, 25
 ; GFX6-NEXT:    s_bfe_i64 s[34:35], s[34:35], 0x10000
-; GFX6-NEXT:    v_mov_b32_e32 v12, s30
-; GFX6-NEXT:    v_mov_b32_e32 v13, s31
-; GFX6-NEXT:    s_lshr_b32 s30, s5, 9
-; GFX6-NEXT:    v_mov_b32_e32 v14, s34
-; GFX6-NEXT:    v_mov_b32_e32 v15, s35
-; GFX6-NEXT:    s_lshr_b32 s34, s5, 6
-; GFX6-NEXT:    s_bfe_i64 s[28:29], s[28:29], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[26:27], s[26:27], 0x10000
-; GFX6-NEXT:    v_mov_b32_e32 v5, s7
+; GFX6-NEXT:    s_bfe_i64 s[28:29], s[30:31], 0x10000
+; GFX6-NEXT:    v_mov_b32_e32 v5, s18
 ; GFX6-NEXT:    buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:496
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
-; GFX6-NEXT:    v_mov_b32_e32 v2, s26
-; GFX6-NEXT:    v_mov_b32_e32 v3, s27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 7
-; GFX6-NEXT:    v_mov_b32_e32 v4, s28
-; GFX6-NEXT:    v_mov_b32_e32 v5, s29
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 4
+; GFX6-NEXT:    v_mov_b32_e32 v2, s28
+; GFX6-NEXT:    v_mov_b32_e32 v3, s29
+; GFX6-NEXT:    s_mov_b32 s29, s7
+; GFX6-NEXT:    v_mov_b32_e32 v4, s34
+; GFX6-NEXT:    v_mov_b32_e32 v5, s35
+; GFX6-NEXT:    s_lshr_b32 s24, s5, 22
+; GFX6-NEXT:    s_lshr_b32 s26, s5, 23
+; GFX6-NEXT:    s_bfe_i64 s[26:27], s[26:27], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[24:25], s[24:25], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[22:23], s[22:23], 0x10000
 ; GFX6-NEXT:    buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:480
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
-; GFX6-NEXT:    v_mov_b32_e32 v8, s22
-; GFX6-NEXT:    v_mov_b32_e32 v9, s23
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 5
-; GFX6-NEXT:    v_mov_b32_e32 v10, s24
-; GFX6-NEXT:    v_mov_b32_e32 v11, s25
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 2
+; GFX6-NEXT:    v_mov_b32_e32 v8, s24
+; GFX6-NEXT:    v_mov_b32_e32 v9, s25
+; GFX6-NEXT:    s_mov_b32 s25, s7
+; GFX6-NEXT:    v_mov_b32_e32 v10, s26
+; GFX6-NEXT:    v_mov_b32_e32 v11, s27
+; GFX6-NEXT:    s_mov_b32 s27, s7
+; GFX6-NEXT:    s_lshr_b32 s20, s5, 20
+; GFX6-NEXT:    s_lshr_b32 s22, s5, 21
+; GFX6-NEXT:    s_bfe_i64 s[22:23], s[22:23], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[20:21], s[20:21], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[18:19], s[18:19], 0x10000
 ; GFX6-...
[truncated]

@llvmbot (Member) commented May 8, 2025

@llvm/pr-subscribers-backend-aarch64

-; GFX6-NEXT:    s_bfe_i64 s[28:29], s[28:29], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[26:27], s[26:27], 0x10000
-; GFX6-NEXT:    v_mov_b32_e32 v5, s7
+; GFX6-NEXT:    s_bfe_i64 s[28:29], s[30:31], 0x10000
+; GFX6-NEXT:    v_mov_b32_e32 v5, s18
 ; GFX6-NEXT:    buffer_store_dwordx4 v[2:5], off, s[0:3], 0 offset:496
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
-; GFX6-NEXT:    v_mov_b32_e32 v2, s26
-; GFX6-NEXT:    v_mov_b32_e32 v3, s27
-; GFX6-NEXT:    s_lshr_b32 s26, s5, 7
-; GFX6-NEXT:    v_mov_b32_e32 v4, s28
-; GFX6-NEXT:    v_mov_b32_e32 v5, s29
-; GFX6-NEXT:    s_lshr_b32 s28, s5, 4
+; GFX6-NEXT:    v_mov_b32_e32 v2, s28
+; GFX6-NEXT:    v_mov_b32_e32 v3, s29
+; GFX6-NEXT:    s_mov_b32 s29, s7
+; GFX6-NEXT:    v_mov_b32_e32 v4, s34
+; GFX6-NEXT:    v_mov_b32_e32 v5, s35
+; GFX6-NEXT:    s_lshr_b32 s24, s5, 22
+; GFX6-NEXT:    s_lshr_b32 s26, s5, 23
+; GFX6-NEXT:    s_bfe_i64 s[26:27], s[26:27], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[24:25], s[24:25], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[22:23], s[22:23], 0x10000
 ; GFX6-NEXT:    buffer_store_dwordx4 v[8:11], off, s[0:3], 0 offset:480
 ; GFX6-NEXT:    s_waitcnt expcnt(0)
-; GFX6-NEXT:    v_mov_b32_e32 v8, s22
-; GFX6-NEXT:    v_mov_b32_e32 v9, s23
-; GFX6-NEXT:    s_lshr_b32 s22, s5, 5
-; GFX6-NEXT:    v_mov_b32_e32 v10, s24
-; GFX6-NEXT:    v_mov_b32_e32 v11, s25
-; GFX6-NEXT:    s_lshr_b32 s24, s5, 2
+; GFX6-NEXT:    v_mov_b32_e32 v8, s24
+; GFX6-NEXT:    v_mov_b32_e32 v9, s25
+; GFX6-NEXT:    s_mov_b32 s25, s7
+; GFX6-NEXT:    v_mov_b32_e32 v10, s26
+; GFX6-NEXT:    v_mov_b32_e32 v11, s27
+; GFX6-NEXT:    s_mov_b32 s27, s7
+; GFX6-NEXT:    s_lshr_b32 s20, s5, 20
+; GFX6-NEXT:    s_lshr_b32 s22, s5, 21
+; GFX6-NEXT:    s_bfe_i64 s[22:23], s[22:23], 0x10000
 ; GFX6-NEXT:    s_bfe_i64 s[20:21], s[20:21], 0x10000
-; GFX6-NEXT:    s_bfe_i64 s[18:19], s[18:19], 0x10000
 ; GFX6-...
[truncated]
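To make the hazard concrete: below is a toy model (plain C++, not LLVM code; the <4 x i16> to <2 x i32> shapes and all names are purely illustrative) of how poison in a single source element leaks into every destination bit it feeds, which is why the demanded-elts mask for the bitcast source must cover all constituent source elements:

#include <array>
#include <cstdint>
#include <cstdio>

struct Elt16 { uint16_t Val; bool IsPoison; };
struct Elt32 { uint32_t Val; bool IsPoison; };

std::array<Elt32, 2> bitcast4x16to2x32(const std::array<Elt16, 4> &Src) {
  std::array<Elt32, 2> Dst{};
  for (int J = 0; J != 2; ++J) {
    // Little endian: destination element J is built from source elements
    // 2*J and 2*J+1.
    Dst[J].Val =
        uint32_t(Src[2 * J].Val) | (uint32_t(Src[2 * J + 1].Val) << 16);
    // Poison is tracked per element: if any contributing source element is
    // poison, the whole destination element (all 32 bits) is poison.
    Dst[J].IsPoison = Src[2 * J].IsPoison || Src[2 * J + 1].IsPoison;
  }
  return Dst;
}

int main() {
  std::array<Elt16, 4> Src = {{{1, false}, {2, false}, {3, false}, {4, false}}};
  // Suppose the user of the bitcast only demands the low 16 bits of Dst[0].
  // Those bits come from Src[0], so the old code only demanded Src[0], and a
  // recursive simplification was then free to turn Src[1] into poison:
  Src[1].IsPoison = true;
  std::array<Elt32, 2> Dst = bitcast4x16to2x32(Src);
  // ...which poisons all of Dst[0], including the demanded low bits.
  std::printf("Dst[0].IsPoison = %d\n", Dst[0].IsPoison); // prints 1
}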

@llvmbot
Member

llvmbot commented May 8, 2025

@llvm/pr-subscribers-backend-arm

Author: Björn Pettersson (bjope)


Patch is 1.50 MiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/139085.diff

120 Files Affected:

  • (modified) llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp (+10-12)
  • (modified) llvm/test/CodeGen/AArch64/reduce-or.ll (+5-6)
  • (modified) llvm/test/CodeGen/AArch64/reduce-xor.ll (+5-6)
  • (modified) llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll (+7-5)
  • (modified) llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll (+51-47)
  • (modified) llvm/test/CodeGen/AMDGPU/dagcomb-mullohi.ll (+4-3)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i1.ll (+324-278)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i16.ll (+60-30)
  • (modified) llvm/test/CodeGen/AMDGPU/load-constant-i8.ll (+487-409)
  • (modified) llvm/test/CodeGen/AMDGPU/load-range-metadata-sign-bits.ll (+3-2)
  • (modified) llvm/test/CodeGen/AMDGPU/mad_64_32.ll (+7-6)
  • (modified) llvm/test/CodeGen/AMDGPU/mul_int24.ll (+32-28)
  • (modified) llvm/test/CodeGen/AMDGPU/sdiv64.ll (+40-40)
  • (modified) llvm/test/CodeGen/AMDGPU/shift-i128.ll (+9-7)
  • (modified) llvm/test/CodeGen/AMDGPU/srem64.ll (+40-40)
  • (modified) llvm/test/CodeGen/ARM/fpclamptosat_vec.ll (+234-210)
  • (modified) llvm/test/CodeGen/Thumb2/mve-fpclamptosat_vec.ll (+42-38)
  • (modified) llvm/test/CodeGen/Thumb2/mve-gather-ind8-unscaled.ll (-5)
  • (modified) llvm/test/CodeGen/Thumb2/mve-laneinterleaving.ll (+45-41)
  • (modified) llvm/test/CodeGen/Thumb2/mve-pred-ext.ll (+1)
  • (modified) llvm/test/CodeGen/Thumb2/mve-satmul-loops.ll (+118-95)
  • (modified) llvm/test/CodeGen/Thumb2/mve-scatter-ind8-unscaled.ll (+2-7)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vecreduce-addpred.ll (+4-4)
  • (modified) llvm/test/CodeGen/Thumb2/mve-vecreduce-mlapred.ll (+4-4)
  • (modified) llvm/test/CodeGen/X86/avx512-intrinsics-fast-isel.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/avx512-intrinsics-upgrade.ll (+32-14)
  • (modified) llvm/test/CodeGen/X86/avx512vl-intrinsics-upgrade.ll (+38-14)
  • (modified) llvm/test/CodeGen/X86/avx512vl-vec-masked-cmp.ll (+140-110)
  • (modified) llvm/test/CodeGen/X86/bitcast-and-setcc-128.ll (+16-12)
  • (modified) llvm/test/CodeGen/X86/bitcast-setcc-128.ll (+7-5)
  • (modified) llvm/test/CodeGen/X86/bitcast-setcc-512.ll (+8-3)
  • (modified) llvm/test/CodeGen/X86/bitcast-vector-bool.ll (+12-3)
  • (modified) llvm/test/CodeGen/X86/buildvec-widen-dotproduct.ll (+37-34)
  • (modified) llvm/test/CodeGen/X86/combine-pmuldq.ll (+69-18)
  • (modified) llvm/test/CodeGen/X86/combine-sdiv.ll (+26-25)
  • (modified) llvm/test/CodeGen/X86/combine-sra.ll (+32-31)
  • (modified) llvm/test/CodeGen/X86/combine-udiv.ll (+4-1)
  • (modified) llvm/test/CodeGen/X86/f16c-intrinsics-fast-isel.ll (+4)
  • (modified) llvm/test/CodeGen/X86/fminimum-fmaximum.ll (+12-12)
  • (modified) llvm/test/CodeGen/X86/fminimumnum-fmaximumnum.ll (+12-12)
  • (modified) llvm/test/CodeGen/X86/fold-int-pow2-with-fmul-or-fdiv.ll (+44-45)
  • (modified) llvm/test/CodeGen/X86/gfni-funnel-shifts.ll (+150-150)
  • (modified) llvm/test/CodeGen/X86/hoist-and-by-const-from-shl-in-eqcmp-zero.ll (+20-28)
  • (modified) llvm/test/CodeGen/X86/known-never-zero.ll (+2-1)
  • (modified) llvm/test/CodeGen/X86/known-pow2.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/known-signbits-shl.ll (+2-1)
  • (modified) llvm/test/CodeGen/X86/known-signbits-vector.ll (+23-7)
  • (modified) llvm/test/CodeGen/X86/masked_store.ll (+13-5)
  • (modified) llvm/test/CodeGen/X86/movmsk-cmp.ll (+84-25)
  • (modified) llvm/test/CodeGen/X86/mulvi32.ll (+8-7)
  • (modified) llvm/test/CodeGen/X86/omit-urem-of-power-of-two-or-zero-when-comparing-with-zero.ll (+14-11)
  • (modified) llvm/test/CodeGen/X86/pmul.ll (+74-66)
  • (modified) llvm/test/CodeGen/X86/pmulh.ll (+4-2)
  • (modified) llvm/test/CodeGen/X86/pr107423.ll (+13-13)
  • (modified) llvm/test/CodeGen/X86/pr35918.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/pr41619.ll (+2)
  • (modified) llvm/test/CodeGen/X86/pr42727.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/pr45563-2.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/pr45833.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/pr77459.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/promote-cmp.ll (+22-19)
  • (modified) llvm/test/CodeGen/X86/promote-vec3.ll (+6-3)
  • (modified) llvm/test/CodeGen/X86/psubus.ll (+132-123)
  • (modified) llvm/test/CodeGen/X86/rotate-extract-vector.ll (+14-26)
  • (modified) llvm/test/CodeGen/X86/sadd_sat_vec.ll (+114-93)
  • (modified) llvm/test/CodeGen/X86/sat-add.ll (+12-9)
  • (modified) llvm/test/CodeGen/X86/sdiv-exact.ll (+17-13)
  • (modified) llvm/test/CodeGen/X86/sdiv_fix_sat.ll (+131-130)
  • (modified) llvm/test/CodeGen/X86/shrink_vmul.ll (+2)
  • (modified) llvm/test/CodeGen/X86/srem-seteq-vec-nonsplat.ll (+11-9)
  • (modified) llvm/test/CodeGen/X86/sshl_sat_vec.ll (+66-68)
  • (modified) llvm/test/CodeGen/X86/ssub_sat_vec.ll (+151-123)
  • (modified) llvm/test/CodeGen/X86/test-shrink-bug.ll (+1-1)
  • (modified) llvm/test/CodeGen/X86/ucmp.ll (+221-214)
  • (modified) llvm/test/CodeGen/X86/udiv-exact.ll (+17-13)
  • (modified) llvm/test/CodeGen/X86/urem-seteq-illegal-types.ll (+3-5)
  • (modified) llvm/test/CodeGen/X86/urem-seteq-vec-nonsplat.ll (+172-122)
  • (modified) llvm/test/CodeGen/X86/ushl_sat_vec.ll (+15-14)
  • (modified) llvm/test/CodeGen/X86/vec-strict-inttofp-256.ll (+43-37)
  • (modified) llvm/test/CodeGen/X86/vec_int_to_fp.ll (+159-126)
  • (modified) llvm/test/CodeGen/X86/vec_minmax_sint.ll (+92-66)
  • (modified) llvm/test/CodeGen/X86/vec_minmax_uint.ll (+92-66)
  • (modified) llvm/test/CodeGen/X86/vec_smulo.ll (+143-139)
  • (modified) llvm/test/CodeGen/X86/vec_umulo.ll (+99-89)
  • (modified) llvm/test/CodeGen/X86/vector-compare-all_of.ll (+21-15)
  • (modified) llvm/test/CodeGen/X86/vector-compare-any_of.ll (+21-15)
  • (modified) llvm/test/CodeGen/X86/vector-constrained-fp-intrinsics.ll (+23-19)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-128.ll (+159-139)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-256.ll (+48-47)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-rot-128.ll (+56-37)
  • (modified) llvm/test/CodeGen/X86/vector-fshl-rot-256.ll (+15-14)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-128.ll (+33-27)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-256.ll (+20-19)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-rot-128.ll (+33-28)
  • (modified) llvm/test/CodeGen/X86/vector-fshr-rot-256.ll (+2-2)
  • (modified) llvm/test/CodeGen/X86/vector-interleaved-store-i32-stride-5.ll (+9-9)
  • (modified) llvm/test/CodeGen/X86/vector-interleaved-store-i32-stride-7.ll (+3214-3353)
  • (modified) llvm/test/CodeGen/X86/vector-interleaved-store-i8-stride-8.ll (+222-214)
  • (modified) llvm/test/CodeGen/X86/vector-mul.ll (+34-34)
  • (modified) llvm/test/CodeGen/X86/vector-pcmp.ll (+3-2)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-fmaximum.ll (+82-81)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-mul.ll (+112-56)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-smax.ll (+99-69)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-smin.ll (+96-65)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-umax.ll (+99-69)
  • (modified) llvm/test/CodeGen/X86/vector-reduce-umin.ll (+96-65)
  • (modified) llvm/test/CodeGen/X86/vector-rotate-128.ll (+56-37)
  • (modified) llvm/test/CodeGen/X86/vector-rotate-256.ll (+15-14)
  • (modified) llvm/test/CodeGen/X86/vector-shift-shl-128.ll (+28-24)
  • (modified) llvm/test/CodeGen/X86/vector-shift-shl-256.ll (+22-20)
  • (modified) llvm/test/CodeGen/X86/vector-shift-shl-sub128.ll (+56-48)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-256-v16.ll (+16-16)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-combining-avx.ll (+6-6)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-combining-ssse3.ll (+3-6)
  • (modified) llvm/test/CodeGen/X86/vector-shuffle-combining.ll (+6-3)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-packus.ll (+945-806)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-ssat.ll (+734-609)
  • (modified) llvm/test/CodeGen/X86/vector-trunc-usat.ll (+398-368)
  • (modified) llvm/test/CodeGen/X86/vector_splat-const-shift-of-constmasked.ll (+4-5)
  • (modified) llvm/test/CodeGen/X86/vselect.ll (+10-20)
diff --git a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
index ba34c72156228..d5697b6031537 100644
--- a/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
+++ b/llvm/lib/CodeGen/SelectionDAG/TargetLowering.cpp
@@ -711,18 +711,17 @@ SDValue TargetLowering::SimplifyMultipleUseDemandedBits(
       unsigned Scale = NumDstEltBits / NumSrcEltBits;
       unsigned NumSrcElts = SrcVT.getVectorNumElements();
       APInt DemandedSrcBits = APInt::getZero(NumSrcEltBits);
-      APInt DemandedSrcElts = APInt::getZero(NumSrcElts);
       for (unsigned i = 0; i != Scale; ++i) {
         unsigned EltOffset = IsLE ? i : (Scale - 1 - i);
         unsigned BitOffset = EltOffset * NumSrcEltBits;
         APInt Sub = DemandedBits.extractBits(NumSrcEltBits, BitOffset);
-        if (!Sub.isZero()) {
+        if (!Sub.isZero())
           DemandedSrcBits |= Sub;
-          for (unsigned j = 0; j != NumElts; ++j)
-            if (DemandedElts[j])
-              DemandedSrcElts.setBit((j * Scale) + i);
-        }
       }
+      // Need to demand all smaller source elements that map to a demanded
+      // destination element, since the recursive calls below may turn
+      // non-demanded elements into poison.
+      APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
 
       if (SDValue V = SimplifyMultipleUseDemandedBits(
               Src, DemandedSrcBits, DemandedSrcElts, DAG, Depth + 1))
@@ -2755,18 +2754,17 @@ bool TargetLowering::SimplifyDemandedBits(
       unsigned Scale = BitWidth / NumSrcEltBits;
       unsigned NumSrcElts = SrcVT.getVectorNumElements();
       APInt DemandedSrcBits = APInt::getZero(NumSrcEltBits);
-      APInt DemandedSrcElts = APInt::getZero(NumSrcElts);
       for (unsigned i = 0; i != Scale; ++i) {
         unsigned EltOffset = IsLE ? i : (Scale - 1 - i);
         unsigned BitOffset = EltOffset * NumSrcEltBits;
         APInt Sub = DemandedBits.extractBits(NumSrcEltBits, BitOffset);
-        if (!Sub.isZero()) {
+        if (!Sub.isZero())
           DemandedSrcBits |= Sub;
-          for (unsigned j = 0; j != NumElts; ++j)
-            if (DemandedElts[j])
-              DemandedSrcElts.setBit((j * Scale) + i);
-        }
       }
+      // Need to demand all smaller source elements that map to a demanded
+      // destination element, since the recursive calls below may turn
+      // non-demanded elements into poison.
+      APInt DemandedSrcElts = APIntOps::ScaleBitMask(DemandedElts, NumSrcElts);
 
       APInt KnownSrcUndef, KnownSrcZero;
       if (SimplifyDemandedVectorElts(Src, DemandedSrcElts, KnownSrcUndef,
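For reference, a standalone sketch (plain C++ with uint64_t masks instead of APInt; the function name and shapes are illustrative) of the widening that APIntOps::ScaleBitMask performs on the demanded-elements mask in the two hunks above:

#include <cassert>
#include <cstdint>
#include <cstdio>

// Widen a demanded-elements mask from NumDstElts lanes to NumSrcElts lanes
// (NumSrcElts must be a multiple of NumDstElts): every demanded destination
// lane marks all Scale source lanes it is built from.
uint64_t scaleDemandedElts(uint64_t DemandedDstElts, unsigned NumDstElts,
                           unsigned NumSrcElts) {
  assert(NumSrcElts % NumDstElts == 0 && "expected a whole scale factor");
  unsigned Scale = NumSrcElts / NumDstElts;
  uint64_t DemandedSrcElts = 0;
  for (unsigned J = 0; J != NumDstElts; ++J)
    if (DemandedDstElts & (uint64_t(1) << J))
      for (unsigned I = 0; I != Scale; ++I)
        DemandedSrcElts |= uint64_t(1) << (J * Scale + I);
  return DemandedSrcElts;
}

int main() {
  // <2 x i32> result bitcast from a <4 x i16> source: demanding only
  // destination element 1 (mask 0b10) must demand source elements 2 and 3
  // (mask 0b1100), even if only a few bits of element 1 are actually needed.
  std::printf("%#llx\n",
              (unsigned long long)scaleDemandedElts(0b10, 2, 4)); // 0xc
}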
diff --git a/llvm/test/CodeGen/AArch64/reduce-or.ll b/llvm/test/CodeGen/AArch64/reduce-or.ll
index aac31ce8b71b7..f5291f5debb40 100644
--- a/llvm/test/CodeGen/AArch64/reduce-or.ll
+++ b/llvm/test/CodeGen/AArch64/reduce-or.ll
@@ -218,13 +218,12 @@ define i8 @test_redor_v3i8(<3 x i8> %a) {
 ; CHECK-NEXT:    movi v0.2d, #0000000000000000
 ; CHECK-NEXT:    mov v0.h[0], w0
 ; CHECK-NEXT:    mov v0.h[1], w1
-; CHECK-NEXT:    fmov x8, d0
 ; CHECK-NEXT:    mov v0.h[2], w2
-; CHECK-NEXT:    fmov x9, d0
-; CHECK-NEXT:    lsr x10, x9, #32
-; CHECK-NEXT:    lsr x9, x9, #16
-; CHECK-NEXT:    orr w8, w8, w10
-; CHECK-NEXT:    orr w0, w8, w9
+; CHECK-NEXT:    fmov x8, d0
+; CHECK-NEXT:    lsr x9, x8, #32
+; CHECK-NEXT:    lsr x10, x8, #16
+; CHECK-NEXT:    orr w8, w8, w9
+; CHECK-NEXT:    orr w0, w8, w10
 ; CHECK-NEXT:    ret
 ;
 ; GISEL-LABEL: test_redor_v3i8:
diff --git a/llvm/test/CodeGen/AArch64/reduce-xor.ll b/llvm/test/CodeGen/AArch64/reduce-xor.ll
index 9a00172f94763..df8485b91468f 100644
--- a/llvm/test/CodeGen/AArch64/reduce-xor.ll
+++ b/llvm/test/CodeGen/AArch64/reduce-xor.ll
@@ -207,13 +207,12 @@ define i8 @test_redxor_v3i8(<3 x i8> %a) {
 ; CHECK-NEXT:    movi v0.2d, #0000000000000000
 ; CHECK-NEXT:    mov v0.h[0], w0
 ; CHECK-NEXT:    mov v0.h[1], w1
-; CHECK-NEXT:    fmov x8, d0
 ; CHECK-NEXT:    mov v0.h[2], w2
-; CHECK-NEXT:    fmov x9, d0
-; CHECK-NEXT:    lsr x10, x9, #32
-; CHECK-NEXT:    lsr x9, x9, #16
-; CHECK-NEXT:    eor w8, w8, w10
-; CHECK-NEXT:    eor w0, w8, w9
+; CHECK-NEXT:    fmov x8, d0
+; CHECK-NEXT:    lsr x9, x8, #32
+; CHECK-NEXT:    lsr x10, x8, #16
+; CHECK-NEXT:    eor w8, w8, w9
+; CHECK-NEXT:    eor w0, w8, w10
 ; CHECK-NEXT:    ret
 ;
 ; GISEL-LABEL: test_redxor_v3i8:
diff --git a/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll b/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
index 7fa416e0dbcd5..e21ae88d52b47 100644
--- a/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
+++ b/llvm/test/CodeGen/AArch64/vecreduce-and-legalization.ll
@@ -125,12 +125,14 @@ define i8 @test_v9i8(<9 x i8> %a) nounwind {
 define i32 @test_v3i32(<3 x i32> %a) nounwind {
 ; CHECK-LABEL: test_v3i32:
 ; CHECK:       // %bb.0:
-; CHECK-NEXT:    ext v1.16b, v0.16b, v0.16b, #8
+; CHECK-NEXT:    mov v1.16b, v0.16b
+; CHECK-NEXT:    mov w8, #-1 // =0xffffffff
+; CHECK-NEXT:    mov v1.s[3], w8
+; CHECK-NEXT:    ext v1.16b, v1.16b, v1.16b, #8
+; CHECK-NEXT:    and v0.8b, v0.8b, v1.8b
 ; CHECK-NEXT:    fmov x8, d0
-; CHECK-NEXT:    lsr x8, x8, #32
-; CHECK-NEXT:    and v1.8b, v0.8b, v1.8b
-; CHECK-NEXT:    fmov x9, d1
-; CHECK-NEXT:    and w0, w9, w8
+; CHECK-NEXT:    lsr x9, x8, #32
+; CHECK-NEXT:    and w0, w8, w9
 ; CHECK-NEXT:    ret
   %b = call i32 @llvm.vector.reduce.and.v3i32(<3 x i32> %a)
   ret i32 %b
diff --git a/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll b/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
index d1090738e24a6..07fc5f30f23a3 100644
--- a/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
+++ b/llvm/test/CodeGen/AMDGPU/cvt_f32_ubyte.ll
@@ -1904,69 +1904,74 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
 ; VI-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
 ; VI-NEXT:    v_add_u32_e32 v2, vcc, 5, v0
 ; VI-NEXT:    v_addc_u32_e32 v3, vcc, 0, v1, vcc
-; VI-NEXT:    flat_load_ubyte v10, v[2:3]
-; VI-NEXT:    v_add_u32_e32 v2, vcc, 6, v0
-; VI-NEXT:    v_addc_u32_e32 v3, vcc, 0, v1, vcc
-; VI-NEXT:    v_add_u32_e32 v4, vcc, 1, v0
+; VI-NEXT:    v_add_u32_e32 v4, vcc, 4, v0
 ; VI-NEXT:    v_addc_u32_e32 v5, vcc, 0, v1, vcc
-; VI-NEXT:    v_add_u32_e32 v6, vcc, 2, v0
+; VI-NEXT:    v_add_u32_e32 v6, vcc, 1, v0
 ; VI-NEXT:    v_addc_u32_e32 v7, vcc, 0, v1, vcc
-; VI-NEXT:    v_add_u32_e32 v8, vcc, 3, v0
+; VI-NEXT:    v_add_u32_e32 v8, vcc, 2, v0
 ; VI-NEXT:    v_addc_u32_e32 v9, vcc, 0, v1, vcc
-; VI-NEXT:    flat_load_ubyte v6, v[6:7]
-; VI-NEXT:    flat_load_ubyte v7, v[8:9]
-; VI-NEXT:    flat_load_ubyte v8, v[2:3]
-; VI-NEXT:    flat_load_ubyte v2, v[0:1]
+; VI-NEXT:    v_add_u32_e32 v10, vcc, 3, v0
+; VI-NEXT:    v_addc_u32_e32 v11, vcc, 0, v1, vcc
+; VI-NEXT:    v_add_u32_e32 v12, vcc, 6, v0
+; VI-NEXT:    v_addc_u32_e32 v13, vcc, 0, v1, vcc
+; VI-NEXT:    flat_load_ubyte v2, v[2:3]
 ; VI-NEXT:    flat_load_ubyte v4, v[4:5]
-; VI-NEXT:    v_add_u32_e32 v0, vcc, 4, v0
-; VI-NEXT:    v_addc_u32_e32 v1, vcc, 0, v1, vcc
-; VI-NEXT:    flat_load_ubyte v9, v[0:1]
+; VI-NEXT:    flat_load_ubyte v5, v[6:7]
+; VI-NEXT:    flat_load_ubyte v7, v[8:9]
+; VI-NEXT:    flat_load_ubyte v3, v[10:11]
+; VI-NEXT:    flat_load_ubyte v6, v[12:13]
+; VI-NEXT:    flat_load_ubyte v0, v[0:1]
+; VI-NEXT:    v_mov_b32_e32 v8, 0x3020504
 ; VI-NEXT:    s_mov_b32 s3, 0xf000
 ; VI-NEXT:    s_mov_b32 s2, -1
 ; VI-NEXT:    s_waitcnt vmcnt(6)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v5, v10
+; VI-NEXT:    v_lshlrev_b32_e32 v9, 8, v2
+; VI-NEXT:    s_waitcnt vmcnt(5)
+; VI-NEXT:    v_or_b32_e32 v4, v9, v4
 ; VI-NEXT:    s_waitcnt vmcnt(4)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v3, v7
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v1, v5
+; VI-NEXT:    s_waitcnt vmcnt(3)
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v2, v7
 ; VI-NEXT:    s_waitcnt vmcnt(2)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v0, v2
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v2, v6
-; VI-NEXT:    s_waitcnt vmcnt(1)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v1, v4
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v6, v8
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v3, v3
+; VI-NEXT:    v_perm_b32 v4, v4, s0, v8
 ; VI-NEXT:    s_waitcnt vmcnt(0)
-; VI-NEXT:    v_cvt_f32_ubyte0_e32 v4, v9
-; VI-NEXT:    buffer_store_dwordx3 v[4:6], off, s[0:3], 0 offset:16
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v0, v0
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v6, v6
+; VI-NEXT:    v_cvt_f32_ubyte1_e32 v5, v4
+; VI-NEXT:    v_cvt_f32_ubyte0_e32 v4, v4
 ; VI-NEXT:    buffer_store_dwordx4 v[0:3], off, s[0:3], 0
+; VI-NEXT:    buffer_store_dwordx3 v[4:6], off, s[0:3], 0 offset:16
 ; VI-NEXT:    s_endpgm
 ;
 ; GFX10-LABEL: load_v7i8_to_v7f32:
 ; GFX10:       ; %bb.0:
 ; GFX10-NEXT:    s_load_dwordx4 s[0:3], s[4:5], 0x24
 ; GFX10-NEXT:    v_lshlrev_b32_e32 v0, 3, v0
-; GFX10-NEXT:    v_mov_b32_e32 v8, 0
+; GFX10-NEXT:    v_mov_b32_e32 v4, 0
+; GFX10-NEXT:    v_mov_b32_e32 v7, 0
 ; GFX10-NEXT:    s_waitcnt lgkmcnt(0)
 ; GFX10-NEXT:    s_clause 0x5
-; GFX10-NEXT:    global_load_ubyte v4, v0, s[2:3] offset:6
+; GFX10-NEXT:    global_load_ubyte v5, v0, s[2:3] offset:6
 ; GFX10-NEXT:    global_load_ubyte v1, v0, s[2:3] offset:3
 ; GFX10-NEXT:    global_load_ubyte v2, v0, s[2:3] offset:2
-; GFX10-NEXT:    global_load_ubyte v5, v0, s[2:3] offset:1
-; GFX10-NEXT:    global_load_short_d16 v7, v0, s[2:3] offset:4
+; GFX10-NEXT:    global_load_ubyte v6, v0, s[2:3] offset:1
+; GFX10-NEXT:    global_load_short_d16 v4, v0, s[2:3] offset:4
 ; GFX10-NEXT:    global_load_ubyte v0, v0, s[2:3]
-; GFX10-NEXT:    s_waitcnt vmcnt(5)
-; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v6, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(4)
 ; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v3, v1
 ; GFX10-NEXT:    s_waitcnt vmcnt(3)
 ; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v2, v2
 ; GFX10-NEXT:    s_waitcnt vmcnt(2)
-; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v1, v5
+; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v1, v6
+; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v6, v5
 ; GFX10-NEXT:    s_waitcnt vmcnt(1)
-; GFX10-NEXT:    v_cvt_f32_ubyte1_e32 v5, v7
-; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v4, v7
+; GFX10-NEXT:    v_cvt_f32_ubyte1_e32 v5, v4
+; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v4, v4
 ; GFX10-NEXT:    s_waitcnt vmcnt(0)
 ; GFX10-NEXT:    v_cvt_f32_ubyte0_e32 v0, v0
-; GFX10-NEXT:    global_store_dwordx3 v8, v[4:6], s[0:1] offset:16
-; GFX10-NEXT:    global_store_dwordx4 v8, v[0:3], s[0:1]
+; GFX10-NEXT:    global_store_dwordx3 v7, v[4:6], s[0:1] offset:16
+; GFX10-NEXT:    global_store_dwordx4 v7, v[0:3], s[0:1]
 ; GFX10-NEXT:    s_endpgm
 ;
 ; GFX9-LABEL: load_v7i8_to_v7f32:
@@ -1984,8 +1989,8 @@ define amdgpu_kernel void @load_v7i8_to_v7f32(ptr addrspace(1) noalias %out, ptr
 ; GFX9-NEXT:    s_waitcnt vmcnt(5)
 ; GFX9-NEXT:    v_cvt_f32_ubyte0_e32 v6, v1
 ; GFX9-NEXT:    s_waitcnt vmcnt(4)
-; GFX9-NEXT:    v_cvt_f32_ubyte1_e32 v5, v2
-; GFX9-NEXT:    v_cvt_f32_ubyte0_e32 v4, v2
+; GFX9-NEXT:    v_cvt_f32_ubyte1_sdwa v5, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
+; GFX9-NEXT:    v_cvt_f32_ubyte0_sdwa v4, v2 dst_sel:DWORD dst_unused:UNUSED_PAD src0_sel:WORD_0
 ; GFX9-NEXT:    s_waitcnt vmcnt(3)
 ; GFX9-NEXT:    v_cvt_f32_ubyte0_e32 v3, v3
 ; GFX9-NEXT:    s_waitcnt vmcnt(2)
[truncated]

@bjope
Collaborator Author

bjope commented May 8, 2025

Doing a simple fix like this results in lots of lit test diffs. Unfortunately, I'd guess they are mostly regressions.

It could be possible to pass around a "MustNotMakeUndemandedEltsMorePoisonous" flag (or bitmask) in the recursive calls to SimplifyDemanded*, and set it to true for the BITCAST cases. That would allow us to do most simplifications even for BITCAST when we know that the bits from some source elements aren't used (although those elements must not be turned into poison). Such a hack would reduce the number of lit test diffs and regressions, but it would require a lot of boilerplate code changes, as well as finding all the places where undemanded elements can be simplified to poison.

Another option might be to introduce new DAG combines etc. to reduce the number of regressions that way. No idea how complicated that would be.
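
For illustration, a compilable sketch of the kind of plumbing such a flag could involve. Everything below is invented for the sake of the example (the flag name comes from the comment above; the struct, helper, and mask representation do not exist in the tree):

#include <cstdint>
#include <cstdio>

// Idea: carry a flag through the SimplifyDemanded* recursion saying that
// lanes outside DemandedElts still must not become *more* poisonous, so
// callers like the BITCAST case could keep demanding only the lanes whose
// bits are actually used.
struct DemandedQuery {
  uint64_t DemandedBits;
  uint64_t DemandedElts;
  bool MustNotMakeUndemandedEltsMorePoisonous = false;
};

// A simplification step would consult this before, e.g., replacing the
// source of an undemanded lane with poison/undef.
bool mayPoisonElt(const DemandedQuery &Q, unsigned EltIdx) {
  bool Demanded = (Q.DemandedElts >> EltIdx) & 1;
  return !Demanded && !Q.MustNotMakeUndemandedEltsMorePoisonous;
}

int main() {
  DemandedQuery Q{0xffff, 0b01, /*MustNotMakeUndemandedEltsMorePoisonous=*/true};
  std::printf("%d\n", mayPoisonElt(Q, 1)); // prints 0: lane 1 must stay intact
}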
