
[RISCV] Handle more (add x, C) -> (sub x, -C) cases #138705


Draft pull request; wants to merge 2 commits into base branch main.

Conversation

pfusik (Contributor) commented May 6, 2025

This is a follow-up to #137309, adding:

  • multi-use of the constant with different adds
  • vectors (vadd.vx -> vsub.vx)
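
For the scalar multi-use case, the point is that one materialization of the negated constant can feed several subtractions. A sketch, lifted from the add_multiuse_const test added in this PR: -1099511627775 takes li + slli + addi to build, while its negation 0xFF_FFFF_FFFF takes only li + srli, so both adds can become subs of the shared negated constant.

    define i64 @add_multiuse_const(i64 %x, i64 %y) {
      ; expected RV64 codegen (matches the CHECK lines in the test below):
      ;   li   a2, -1
      ;   srli a2, a2, 24
      ;   sub  a0, a0, a2
      ;   sub  a1, a1, a2
      ;   xor  a0, a0, a1
      %a = add i64 %x, -1099511627775
      %b = add i64 %y, -1099511627775
      %xor = xor i64 %a, %b
      ret i64 %xor
    }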


github-actions bot commented May 6, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

Review comment on llvm/lib/Target/RISCV/RISCVInstrInfoVSDPatterns.td:

let Predicates = [HasVInstructionsI64] in {
  def : Pat<(add (vti.Vector vti.RegClass:$rs1),
                 (vti.Vector (SplatPat (i64 negImm:$rs2)))),
//               (riscv_vmv_v_x_vl undef, negImm:$rs2, srcvalue)),
pfusik (Contributor, Author) commented May 6, 2025

This doesn't match the vadd_vx_imm64_to_sub test yet (neither with SplatPat nor riscv_vmv_v_x_vl).
I'd appreciate hints on how to debug this. Or maybe you can see why it isn't working?
Perhaps I need to add SplatPat_simmXXX? If so, why?
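
A sketch of one way to trace this, assuming an assertions-enabled build of llc: running instruction selection on the failing test with isel debug output should show which candidate patterns are tried on the ADD node and where matching fails.

    llc -mtriple=riscv64 -mattr=+v -debug-only=isel \
        llvm/test/CodeGen/RISCV/rvv/vadd-sdnode.ll -o /dev/null

Comparing that trace for vadd_vx_imm64_to_sub should help narrow down whether the SplatPat fragment or the nested negImm ComplexPattern is the part that fails to match.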

llvmbot (Member) commented May 7, 2025

@llvm/pr-subscribers-backend-risc-v

Author: Piotr Fusik (pfusik)

Changes

This is a follow-up to #137309, adding:

  • multi-use of the constant with different adds
  • vectors (vadd.vx -> vsub.vx)

Full diff: https://github.com/llvm/llvm-project/pull/138705.diff

6 Files Affected:

  • (modified) llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp (+6-1)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td (+30)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfoVSDPatterns.td (+13)
  • (modified) llvm/lib/Target/RISCV/RISCVInstrInfoVVLPatterns.td (+13)
  • (modified) llvm/test/CodeGen/RISCV/add-imm64-to-sub.ll (+16-1)
  • (modified) llvm/test/CodeGen/RISCV/rvv/vadd-sdnode.ll (+54)
diff --git a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
index 86bdb4c7fd24c..e250a7d432218 100644
--- a/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
+++ b/llvm/lib/Target/RISCV/RISCVISelDAGToDAG.cpp
@@ -3207,11 +3207,16 @@ bool RISCVDAGToDAGISel::selectSHXADD_UWOp(SDValue N, unsigned ShAmt,
 }
 
 bool RISCVDAGToDAGISel::selectNegImm(SDValue N, SDValue &Val) {
-  if (!isa<ConstantSDNode>(N) || !N.hasOneUse())
+  if (!isa<ConstantSDNode>(N))
     return false;
   int64_t Imm = cast<ConstantSDNode>(N)->getSExtValue();
   if (isInt<32>(Imm))
     return false;
+
+  if (any_of(N->users(),
+             [](const SDNode *U) { return U->getOpcode() != ISD::ADD; }))
+    return false;
+
   int OrigImmCost = RISCVMatInt::getIntMatCost(APInt(64, Imm), 64, *Subtarget,
                                                /*CompressionCost=*/true);
   int NegImmCost = RISCVMatInt::getIntMatCost(APInt(64, -Imm), 64, *Subtarget,
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
index 5edcfdf2654a4..44e7dfa22c378 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoVPseudos.td
@@ -6228,6 +6228,36 @@ foreach vti = AllIntegerVectors in {
   }
 }
 
+// (add v, C) -> (sub v, -C) if -C cheaper to materialize
+defvar I64IntegerVectors = !filter(vti, AllIntegerVectors, !eq(vti.SEW, 64));
+foreach vti = I64IntegerVectors in {
+  let Predicates = [HasVInstructionsI64] in {
+    def : Pat<(vti.Vector (int_riscv_vadd (vti.Vector vti.RegClass:$passthru),
+                                          (vti.Vector vti.RegClass:$rs1),
+                                          (i64 negImm:$rs2),
+                                          VLOpFrag)),
+              (!cast<Instruction>("PseudoVSUB_VX_"#vti.LMul.MX)
+                                             vti.RegClass:$passthru,
+                                             vti.RegClass:$rs1,
+                                             negImm:$rs2,
+                                             GPR:$vl, vti.Log2SEW, TU_MU)>;
+    def : Pat<(vti.Vector (int_riscv_vadd_mask (vti.Vector vti.RegClass:$passthru),
+                                               (vti.Vector vti.RegClass:$rs1),
+                                               (i64 negImm:$rs2),
+                                               (vti.Mask VMV0:$vm),
+                                               VLOpFrag,
+                                               (i64 timm:$policy))),
+              (!cast<Instruction>("PseudoVSUB_VX_"#vti.LMul.MX#"_MASK")
+                                             vti.RegClass:$passthru,
+                                             vti.RegClass:$rs1,
+                                             negImm:$rs2,
+                                             (vti.Mask VMV0:$vm),
+                                             GPR:$vl,
+                                             vti.Log2SEW,
+                                             (i64 timm:$policy))>;
+  }
+}
+
 //===----------------------------------------------------------------------===//
 // 11.2. Vector Widening Integer Add/Subtract
 //===----------------------------------------------------------------------===//
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoVSDPatterns.td b/llvm/lib/Target/RISCV/RISCVInstrInfoVSDPatterns.td
index 93228f2a9e167..c083ac6f57643 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoVSDPatterns.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoVSDPatterns.td
@@ -907,6 +907,19 @@ foreach vti = AllIntegerVectors in {
   }
 }
 
+// (add v, C) -> (sub v, -C) if -C cheaper to materialize
+foreach vti = I64IntegerVectors in {
+  let Predicates = [HasVInstructionsI64] in {
+    def : Pat<(add (vti.Vector vti.RegClass:$rs1),
+                   (vti.Vector (SplatPat (i64 negImm:$rs2)))),
+//                 (riscv_vmv_v_x_vl undef, negImm:$rs2, srcvalue)),
+              (!cast<Instruction>("PseudoVSUB_VX_"#vti.LMul.MX)
+                   (vti.Vector (IMPLICIT_DEF)),
+                   vti.RegClass:$rs1,
+                   negImm:$rs2, vti.AVL, vti.Log2SEW, TA_MA)>;
+  }
+}
+
 // 11.2. Vector Widening Integer Add and Subtract
 defm : VPatWidenBinarySDNode_VV_VX_WV_WX<add, sext_oneuse, "PseudoVWADD">;
 defm : VPatWidenBinarySDNode_VV_VX_WV_WX<add, zext_oneuse, "PseudoVWADDU">;
diff --git a/llvm/lib/Target/RISCV/RISCVInstrInfoVVLPatterns.td b/llvm/lib/Target/RISCV/RISCVInstrInfoVVLPatterns.td
index 2b0b31c79c7a7..5975bcd2a323b 100644
--- a/llvm/lib/Target/RISCV/RISCVInstrInfoVVLPatterns.td
+++ b/llvm/lib/Target/RISCV/RISCVInstrInfoVVLPatterns.td
@@ -1957,6 +1957,19 @@ foreach vti = AllIntegerVectors in {
   }
 }
 
+// (add v, C) -> (sub v, -C) if -C cheaper to materialize
+foreach vti = I64IntegerVectors in {
+  let Predicates = [HasVInstructionsI64] in {
+    def : Pat<(riscv_add_vl (vti.Vector vti.RegClass:$rs1),
+                            (vti.Vector (SplatPat (i64 negImm:$rs2))),
+                            vti.RegClass:$passthru, (vti.Mask VMV0:$vm), VLOpFrag),
+              (!cast<Instruction>("PseudoVSUB_VX_"#vti.LMul.MX#"_MASK")
+                   vti.RegClass:$passthru, vti.RegClass:$rs1,
+                   negImm:$rs2, (vti.Mask VMV0:$vm),
+                   GPR:$vl, vti.Log2SEW, TAIL_AGNOSTIC)>;
+  }
+}
+
 // 11.2. Vector Widening Integer Add/Subtract
 defm : VPatBinaryWVL_VV_VX_WV_WX<riscv_vwadd_vl,  riscv_vwadd_w_vl,  "PseudoVWADD">;
 defm : VPatBinaryWVL_VV_VX_WV_WX<riscv_vwaddu_vl, riscv_vwaddu_w_vl, "PseudoVWADDU">;
diff --git a/llvm/test/CodeGen/RISCV/add-imm64-to-sub.ll b/llvm/test/CodeGen/RISCV/add-imm64-to-sub.ll
index ddcf4e1a8aa77..3c02efbfe02f9 100644
--- a/llvm/test/CodeGen/RISCV/add-imm64-to-sub.ll
+++ b/llvm/test/CodeGen/RISCV/add-imm64-to-sub.ll
@@ -56,6 +56,21 @@ define i64 @add_multiuse(i64 %x) {
 ; CHECK-NEXT:    and a0, a0, a1
 ; CHECK-NEXT:    ret
   %add = add i64 %x, -1099511627775
-  %xor = and i64 %add, -1099511627775
+  %and = and i64 %add, -1099511627775
+  ret i64 %and
+}
+
+define i64 @add_multiuse_const(i64 %x, i64 %y) {
+; CHECK-LABEL: add_multiuse_const:
+; CHECK:       # %bb.0:
+; CHECK-NEXT:    li a2, -1
+; CHECK-NEXT:    srli a2, a2, 24
+; CHECK-NEXT:    sub a0, a0, a2
+; CHECK-NEXT:    sub a1, a1, a2
+; CHECK-NEXT:    xor a0, a0, a1
+; CHECK-NEXT:    ret
+  %a = add i64 %x, -1099511627775
+  %b = add i64 %y, -1099511627775
+  %xor = xor i64 %a, %b
   ret i64 %xor
 }
diff --git a/llvm/test/CodeGen/RISCV/rvv/vadd-sdnode.ll b/llvm/test/CodeGen/RISCV/rvv/vadd-sdnode.ll
index ac22e11d30cdc..a95ad7f744af3 100644
--- a/llvm/test/CodeGen/RISCV/rvv/vadd-sdnode.ll
+++ b/llvm/test/CodeGen/RISCV/rvv/vadd-sdnode.ll
@@ -865,3 +865,57 @@ define <vscale x 8 x i32> @vadd_vv_mask_negative1_nxv8i32(<vscale x 8 x i32> %va
   %vd = add <vscale x 8 x i32> %vc, %vs
   ret <vscale x 8 x i32> %vd
 }
+
+define <vscale x 1 x i64> @vadd_vx_imm64_to_sub(<vscale x 1 x i64> %va) nounwind {
+; RV32-LABEL: vadd_vx_imm64_to_sub:
+; RV32:       # %bb.0:
+; RV32-NEXT:    addi sp, sp, -16
+; RV32-NEXT:    li a0, -256
+; RV32-NEXT:    li a1, 1
+; RV32-NEXT:    sw a1, 8(sp)
+; RV32-NEXT:    sw a0, 12(sp)
+; RV32-NEXT:    addi a0, sp, 8
+; RV32-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; RV32-NEXT:    vlse64.v v9, (a0), zero
+; RV32-NEXT:    vadd.vv v8, v8, v9
+; RV32-NEXT:    addi sp, sp, 16
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vadd_vx_imm64_to_sub:
+; RV64:       # %bb.0:
+; RV64-NEXT:    li a0, -1
+; RV64-NEXT:    slli a0, a0, 40
+; RV64-NEXT:    addi a0, a0, 1
+; RV64-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; RV64-NEXT:    vadd.vx v8, v8, a0
+; RV64-NEXT:    ret
+  %vc = add <vscale x 1 x i64> splat (i64 -1099511627775), %va
+  ret <vscale x 1 x i64> %vc
+}
+
+define <vscale x 1 x i64> @vadd_vx_imm64_to_sub_swapped(<vscale x 1 x i64> %va) nounwind {
+; RV32-LABEL: vadd_vx_imm64_to_sub_swapped:
+; RV32:       # %bb.0:
+; RV32-NEXT:    addi sp, sp, -16
+; RV32-NEXT:    li a0, -256
+; RV32-NEXT:    li a1, 1
+; RV32-NEXT:    sw a1, 8(sp)
+; RV32-NEXT:    sw a0, 12(sp)
+; RV32-NEXT:    addi a0, sp, 8
+; RV32-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; RV32-NEXT:    vlse64.v v9, (a0), zero
+; RV32-NEXT:    vadd.vv v8, v8, v9
+; RV32-NEXT:    addi sp, sp, 16
+; RV32-NEXT:    ret
+;
+; RV64-LABEL: vadd_vx_imm64_to_sub_swapped:
+; RV64:       # %bb.0:
+; RV64-NEXT:    li a0, -1
+; RV64-NEXT:    slli a0, a0, 40
+; RV64-NEXT:    addi a0, a0, 1
+; RV64-NEXT:    vsetvli a1, zero, e64, m1, ta, ma
+; RV64-NEXT:    vadd.vx v8, v8, a0
+; RV64-NEXT:    ret
+  %vc = add <vscale x 1 x i64> %va, splat (i64 -1099511627775)
+  ret <vscale x 1 x i64> %vc
+}
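
Note that the RV64 checks above still materialize -1099511627775 and use vadd.vx, i.e. the new vector pattern does not fire yet (see the review comment earlier in the thread). Once it matches, the output would presumably mirror the scalar add-imm64-to-sub tests, materializing the cheaper negated constant and using vsub.vx, roughly:

    # hypothetical RV64 output once the SplatPat pattern matches; not produced by this revision
    li      a0, -1
    srli    a0, a0, 24
    vsetvli a1, zero, e64, m1, ta, ma
    vsub.vx v8, v8, a0
    ret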
