
[MachinePipeliner] Introduce a new class for loop-carried deps #137663


Open. Wants to merge 1 commit into base: users/kasuga-fj/pipeliner-refactor-addloopcarried

Conversation

kasuga-fj (Contributor)

In MachinePipeliner, loop-carried memory dependencies are represented by edges in the DAG itself, which complicates the logic and causes some necessary dependencies to be missed. This patch introduces a new class that manages loop-carried memory dependencies separately, simplifying that logic. The ultimate goal is to add the currently missing dependencies; this patch is a first step toward that and is not intended to change current behavior. It also adds new tests that show the missed dependencies, which should be fixed in the future.

Split off from #135148
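
For illustration (an example of ours, not taken from the patch): the dependencies at stake here exist only across iterations. In the hypothetical C++ loop below, the store in iteration i feeds the load in iteration i+1, a store-to-load (RAW) loop-carried dependence; a pipeliner that drops the corresponding edge could hoist a later iteration's load above the store.

  // Hypothetical example: a[i+1] written in iteration i is read as a[i]
  // in iteration i+1, so a RAW dependence is carried by the loop.
  void f(int *a, int n) {
    for (int i = 0; i + 1 < n; ++i)
      a[i + 1] = a[i] * 2;
  }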

@llvmbot (Member) commented Apr 28, 2025

@llvm/pr-subscribers-backend-aarch64

@llvm/pr-subscribers-backend-hexagon

Author: Ryotaro Kasuga (kasuga-fj)

Patch is 45.92 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/137663.diff

10 Files Affected:

  • (modified) llvm/include/llvm/CodeGen/MachinePipeliner.h (+33-1)
  • (modified) llvm/lib/CodeGen/MachinePipeliner.cpp (+224-29)
  • (added) llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions1.mir (+109)
  • (added) llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions2.mir (+99)
  • (added) llvm/test/CodeGen/Hexagon/swp-loop-carried-order-dep1.mir (+110)
  • (added) llvm/test/CodeGen/Hexagon/swp-loop-carried-order-dep2.mir (+105)
  • (added) llvm/test/CodeGen/Hexagon/swp-loop-carried-order-dep3.mir (+109)
  • (added) llvm/test/CodeGen/Hexagon/swp-loop-carried-order-dep4.mir (+109)
  • (added) llvm/test/CodeGen/Hexagon/swp-loop-carried-order-dep5.mir (+111)
  • (added) llvm/test/CodeGen/Hexagon/swp-loop-carried-order-dep6.mir (+154)
diff --git a/llvm/include/llvm/CodeGen/MachinePipeliner.h b/llvm/include/llvm/CodeGen/MachinePipeliner.h
index 966ffb7a1fbd2..e4e794c434adb 100644
--- a/llvm/include/llvm/CodeGen/MachinePipeliner.h
+++ b/llvm/include/llvm/CodeGen/MachinePipeliner.h
@@ -190,6 +190,38 @@ class SwingSchedulerDDGEdge {
   bool ignoreDependence(bool IgnoreAnti) const;
 };
 
+/// Represents loop-carried dependencies. Because SwingSchedulerDAG, as its
+/// name suggests, cannot contain cyclic dependencies, such dependencies must
+/// be handled separately. After DAG construction is finished, these
+/// dependencies are added to SwingSchedulerDDG.
+/// TODO: Also handle output-dependencies introduced by physical registers.
+struct LoopCarriedEdges {
+  using OrderDep = SmallSetVector<SUnit *, 8>;
+  using OrderDepsType = DenseMap<SUnit *, OrderDep>;
+
+  OrderDepsType OrderDeps;
+
+  const OrderDep *getOrderDepOrNull(SUnit *Key) const {
+    auto Ite = OrderDeps.find(Key);
+    if (Ite == OrderDeps.end())
+      return nullptr;
+    return &Ite->second;
+  }
+
+  /// Returns true if the edge from \p From to \p To is a back-edge that should
+  /// be used when scheduling.
+  bool shouldUseWhenScheduling(const SUnit *From, const SUnit *To) const;
+
+  /// Adds some edges to the original DAG that correspond to loop-carried
+  /// dependencies. Historically, loop-carried edges are represented by using
+  /// non-loop-carried edges in the original DAG. This function appends such
+  /// edges to preserve the previous behavior.
+  void modifySUnits(std::vector<SUnit> &SUnits);
+
+  void dump(SUnit *SU, const TargetRegisterInfo *TRI,
+            const MachineRegisterInfo *MRI) const;
+};
+
 /// Represents dependencies between instructions. This class is a wrapper of
 /// `SUnits` and its dependencies to manipulate back-edges in a natural way.
 /// Currently it only supports back-edges via PHI, which are expressed as
@@ -402,7 +434,7 @@ class SwingSchedulerDAG : public ScheduleDAGInstrs {
                              const MachineInstr *OtherMI) const;
 
 private:
-  void addLoopCarriedDependences();
+  LoopCarriedEdges addLoopCarriedDependences();
   void updatePhiDependences();
   void changeDependences();
   unsigned calculateResMII();
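
As a reading aid for the interface above (a minimal sketch of ours, assuming SUnits and a populated LoopCarriedEdges named LCE are in scope, as they are inside SwingSchedulerDAG): consumers query edges per node via getOrderDepOrNull, the same pattern LoopCarriedEdges::dump uses later in this diff.

  // Walk every recorded loop-carried order dependence (hypothetical caller).
  for (SUnit &SU : SUnits) {
    if (const LoopCarriedEdges::OrderDep *Deps = LCE.getOrderDepOrNull(&SU))
      for (SUnit *Dst : *Deps)
        dbgs() << "SU(" << SU.NodeNum << ") -> SU(" << Dst->NodeNum << ")\n";
  }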
diff --git a/llvm/lib/CodeGen/MachinePipeliner.cpp b/llvm/lib/CodeGen/MachinePipeliner.cpp
index 3d161ffbe40a4..4568af935011a 100644
--- a/llvm/lib/CodeGen/MachinePipeliner.cpp
+++ b/llvm/lib/CodeGen/MachinePipeliner.cpp
@@ -266,6 +266,82 @@ struct SUnitWithMemInfo {
   bool getUnderlyingObjects();
 };
 
+/// Add loop-carried chain dependencies. This class handles the same type of
+/// dependencies added by `ScheduleDAGInstrs::buildSchedGraph`, but takes into
+/// account dependencies across iterations.
+class LoopCarriedOrderDepsTracker {
+  // Type of instruction that is relevant to order-dependencies
+  enum class InstrTag {
+    Barrier = 0,      ///< A barrier event instruction.
+    LoadOrStore = 1,  ///< An instruction that may load or store memory, but is
+                      ///< not a barrier event.
+    FPExceptions = 2, ///< An instruction that does not match the above, but
+                      ///< may raise floating-point exceptions.
+  };
+
+  struct TaggedSUnit : PointerIntPair<SUnit *, 2> {
+    TaggedSUnit(SUnit *SU, InstrTag Tag)
+        : PointerIntPair<SUnit *, 2>(SU, unsigned(Tag)) {}
+
+    InstrTag getTag() const { return InstrTag(getInt()); }
+  };
+
+  /// Holds loads and stores with memory related information.
+  struct LoadStoreChunk {
+    SmallVector<SUnitWithMemInfo, 4> Loads;
+    SmallVector<SUnitWithMemInfo, 4> Stores;
+
+    void append(SUnit *SU);
+  };
+
+  SwingSchedulerDAG *DAG;
+  BatchAAResults *BAA;
+  std::vector<SUnit> &SUnits;
+
+  /// The size of SUnits, for convenience.
+  const unsigned N;
+
+  /// Loop-carried Edges.
+  std::vector<BitVector> LoopCarried;
+
+  /// Instructions related to chain dependencies. They are one of the
+  /// following:
+  ///
+  ///  1. Barrier event.
+  ///  2. Load, but neither a barrier event, invariant load, nor may load trap
+  ///     value.
+  ///  3. Store, but not a barrier event.
+  ///  4. None of them, but may raise floating-point exceptions.
+  ///
+  /// This is used when analyzing loop-carried dependencies in the presence of
+  /// global barrier instructions.
+  std::vector<TaggedSUnit> TaggedSUnits;
+
+  const TargetInstrInfo *TII = nullptr;
+  const TargetRegisterInfo *TRI = nullptr;
+
+public:
+  LoopCarriedOrderDepsTracker(SwingSchedulerDAG *SSD, BatchAAResults *BAA,
+                              const TargetInstrInfo *TII,
+                              const TargetRegisterInfo *TRI);
+
+  /// The main function to compute loop-carried order-dependencies.
+  void computeDependencies();
+
+  const BitVector &getLoopCarried(unsigned Idx) const {
+    return LoopCarried[Idx];
+  }
+
+private:
+  /// Tags \p SU if the instruction may affect order-dependencies; returns
+  /// std::nullopt otherwise.
+  std::optional<TaggedSUnit> checkInstrType(SUnit *SU) const;
+
+  void addLoopCarriedDependenciesForChunks(const LoadStoreChunk &From,
+                                           const LoadStoreChunk &To);
+
+  void computeDependenciesAux();
+};
+
 } // end anonymous namespace
 
 /// The "main" function for implementing Swing Modulo Scheduling.
@@ -593,13 +669,19 @@ void SwingSchedulerDAG::setMAX_II() {
 /// scheduling part of the Swing Modulo Scheduling algorithm.
 void SwingSchedulerDAG::schedule() {
   buildSchedGraph(AA);
-  addLoopCarriedDependences();
+  const LoopCarriedEdges LCE = addLoopCarriedDependences();
   updatePhiDependences();
   Topo.InitDAGTopologicalSorting();
   changeDependences();
   postProcessDAG();
   DDG = std::make_unique<SwingSchedulerDDG>(SUnits, &EntrySU, &ExitSU);
-  LLVM_DEBUG(dump());
+  LLVM_DEBUG({
+    dump();
+    dbgs() << "===== Loop Carried Edges Begin =====\n";
+    for (SUnit &SU : SUnits)
+      LCE.dump(&SU, TRI, &MRI);
+    dbgs() << "===== Loop Carried Edges End =====\n";
+  });
 
   NodeSetType NodeSets;
   findCircuits(NodeSets);
@@ -832,15 +914,6 @@ static bool isSuccOrder(SUnit *SUa, SUnit *SUb) {
   return false;
 }
 
-/// Return true if the instruction causes a chain between memory
-/// references before and after it.
-static bool isDependenceBarrier(MachineInstr &MI) {
-  return MI.isCall() || MI.mayRaiseFPException() ||
-         MI.hasUnmodeledSideEffects() ||
-         (MI.hasOrderedMemoryRef() &&
-          (!MI.mayLoad() || !MI.isDereferenceableInvariantLoad()));
-}
-
 SUnitWithMemInfo::SUnitWithMemInfo(SUnit *SU) : SU(SU) {
   if (!getUnderlyingObjects())
     return;
@@ -941,28 +1014,116 @@ static bool hasLoopCarriedMemDep(const SUnitWithMemInfo &Src,
   return false;
 }
 
+void LoopCarriedOrderDepsTracker::LoadStoreChunk::append(SUnit *SU) {
+  const MachineInstr *MI = SU->getInstr();
+  if (!MI->mayLoadOrStore())
+    return;
+  (MI->mayStore() ? Stores : Loads).emplace_back(SU);
+}
+
+LoopCarriedOrderDepsTracker::LoopCarriedOrderDepsTracker(
+    SwingSchedulerDAG *SSD, BatchAAResults *BAA, const TargetInstrInfo *TII,
+    const TargetRegisterInfo *TRI)
+    : DAG(SSD), BAA(BAA), SUnits(DAG->SUnits), N(SUnits.size()),
+      LoopCarried(N, BitVector(N)), TII(TII), TRI(TRI) {}
+
+void LoopCarriedOrderDepsTracker::computeDependencies() {
+  // Traverse all instructions and extract only what we are targeting.
+  for (auto &SU : SUnits) {
+    auto Tagged = checkInstrType(&SU);
+
+    // This instruction has no loop-carried order-dependencies.
+    if (!Tagged)
+      continue;
+    TaggedSUnits.push_back(*Tagged);
+  }
+
+  computeDependenciesAux();
+
+  LLVM_DEBUG({
+    for (unsigned I = 0; I != N; I++)
+      assert(!LoopCarried[I].test(I) && "Unexpected self-loop");
+  });
+}
+
+std::optional<LoopCarriedOrderDepsTracker::TaggedSUnit>
+LoopCarriedOrderDepsTracker::checkInstrType(SUnit *SU) const {
+  MachineInstr *MI = SU->getInstr();
+  if (TII->isGlobalMemoryObject(MI))
+    return TaggedSUnit(SU, InstrTag::Barrier);
+
+  if (MI->mayStore() ||
+      (MI->mayLoad() && !MI->isDereferenceableInvariantLoad()))
+    return TaggedSUnit(SU, InstrTag::LoadOrStore);
+
+  if (MI->mayRaiseFPException())
+    return TaggedSUnit(SU, InstrTag::FPExceptions);
+
+  return std::nullopt;
+}
+
+void LoopCarriedOrderDepsTracker::addLoopCarriedDependenciesForChunks(
+    const LoadStoreChunk &From, const LoadStoreChunk &To) {
+  // Add dependencies for load-to-store (WAR) from top to bottom.
+  for (const SUnitWithMemInfo &Src : From.Loads)
+    for (const SUnitWithMemInfo &Dst : To.Stores)
+      if (Src.SU->NodeNum < Dst.SU->NodeNum &&
+          hasLoopCarriedMemDep(Src, Dst, *BAA, TII, TRI))
+        LoopCarried[Src.SU->NodeNum].set(Dst.SU->NodeNum);
+
+  // TODO: The following dependencies are missed.
+  //
+  // - Dependencies for load-to-store from bottom to top.
+  // - Dependencies for store-to-load (RAW).
+  // - Dependencies for store-to-store (WAW).
+}
+
+void LoopCarriedOrderDepsTracker::computeDependenciesAux() {
+  SmallVector<LoadStoreChunk, 2> Chunks(1);
+  for (const auto &TSU : TaggedSUnits) {
+    InstrTag Tag = TSU.getTag();
+    SUnit *SU = TSU.getPointer();
+    switch (Tag) {
+    case InstrTag::Barrier:
+      Chunks.emplace_back();
+      break;
+    case InstrTag::LoadOrStore:
+      Chunks.back().append(SU);
+      break;
+    case InstrTag::FPExceptions:
+      // TODO: Handle this properly.
+      break;
+    }
+  }
+
+  // Add dependencies between memory operations. If one or more barrier
+  // events occur between two memory instructions, we don't add a
+  // loop-carried dependence between them.
+  for (const LoadStoreChunk &Chunk : Chunks)
+    addLoopCarriedDependenciesForChunks(Chunk, Chunk);
+
+  // TODO: If there are multiple barrier instructions, add dependencies from
+  // the last barrier instruction (or the loads/stores below it) to the first
+  // barrier instruction (or the loads/stores above it).
+}
+
 /// Add a chain edge between a load and store if the store can be an
 /// alias of the load on a subsequent iteration, i.e., a loop carried
 /// dependence. This code is very similar to the code in ScheduleDAGInstrs
 /// but that code doesn't create loop carried dependences.
-void SwingSchedulerDAG::addLoopCarriedDependences() {
-  SmallVector<SUnitWithMemInfo, 4> PendingLoads;
-  for (auto &SU : SUnits) {
-    MachineInstr &MI = *SU.getInstr();
-    if (isDependenceBarrier(MI))
-      PendingLoads.clear();
-    else if (MI.mayLoad()) {
-      PendingLoads.emplace_back(&SU);
-    } else if (MI.mayStore()) {
-      SUnitWithMemInfo Store(&SU);
-      for (const SUnitWithMemInfo &Load : PendingLoads)
-        if (hasLoopCarriedMemDep(Load, Store, BAA, TII, TRI)) {
-          SDep Dep(Load.SU, SDep::Barrier);
-          Dep.setLatency(1);
-          SU.addPred(Dep);
-        }
-    }
-  }
+/// TODO: Also compute output-dependencies.
+LoopCarriedEdges SwingSchedulerDAG::addLoopCarriedDependences() {
+  LoopCarriedEdges LCE;
+
+  // Add loop-carried order-dependencies
+  LoopCarriedOrderDepsTracker LCODTracker(this, &BAA, TII, TRI);
+  LCODTracker.computeDependencies();
+  for (unsigned I = 0; I != SUnits.size(); I++)
+    for (const int Succ : LCODTracker.getLoopCarried(I).set_bits())
+      LCE.OrderDeps[&SUnits[I]].insert(&SUnits[Succ]);
+
+  LCE.modifySUnits(SUnits);
+  return LCE;
 }
 
 /// Update the phi dependences to the DAG because ScheduleDAGInstrs no longer
@@ -4002,3 +4163,37 @@ const SwingSchedulerDDG::EdgesType &
 SwingSchedulerDDG::getOutEdges(const SUnit *SU) const {
   return getEdges(SU).Succs;
 }
+
+void LoopCarriedEdges::modifySUnits(std::vector<SUnit> &SUnits) {
+  // Currently this function simply adds all dependencies represented by this
+  // object. After we properly handle missed dependencies, the logic here will
+  // be more complex, as currently missed edges should not be added to the DAG.
+  for (SUnit &SU : SUnits) {
+    SUnit *Src = &SU;
+    if (auto *OrderDep = getOrderDepOrNull(Src)) {
+      SDep Dep(Src, SDep::Barrier);
+      Dep.setLatency(1);
+      for (SUnit *Dst : *OrderDep)
+        Dst->addPred(Dep);
+    }
+  }
+}
+
+void LoopCarriedEdges::dump(SUnit *SU, const TargetRegisterInfo *TRI,
+                            const MachineRegisterInfo *MRI) const {
+  const auto *Order = getOrderDepOrNull(SU);
+
+  if (!Order)
+    return;
+
+  const auto DumpSU = [](const SUnit *SU) {
+    std::ostringstream OSS;
+    OSS << "SU(" << SU->NodeNum << ")";
+    return OSS.str();
+  };
+
+  dbgs() << "  Loop carried edges from " << DumpSU(SU) << "\n"
+         << "    Order\n";
+  for (SUnit *Dst : *Order)
+    dbgs() << "      " << DumpSU(Dst) << "\n";
+}
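
To make the tracker's encoding concrete (a standalone sketch of ours, with made-up sizes and edges): LoopCarried holds one BitVector row per SUnit, and bit J set in row I means a loop-carried edge from SU(I) to SU(J); addLoopCarriedDependences above then converts the set bits into OrderDeps entries.

  #include "llvm/ADT/BitVector.h"
  #include <vector>

  int main() {
    const unsigned N = 4; // pretend there are four SUnits
    std::vector<llvm::BitVector> LoopCarried(N, llvm::BitVector(N));
    LoopCarried[0].set(2); // record a loop-carried edge SU(0) -> SU(2)
    LoopCarried[1].set(3); // record a loop-carried edge SU(1) -> SU(3)
    // Visit every recorded edge, as addLoopCarriedDependences does above.
    for (unsigned I = 0; I != N; ++I)
      for (int Succ : LoopCarried[I].set_bits())
        (void)Succ; // the real code does LCE.OrderDeps[&SUnits[I]].insert(...)
    return 0;
  }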
diff --git a/llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions1.mir b/llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions1.mir
new file mode 100644
index 0000000000000..bcc6a3ea9b285
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions1.mir
@@ -0,0 +1,109 @@
+# RUN: llc -mtriple=aarch64 -run-pass=pipeliner -debug-only=pipeliner -aarch64-enable-pipeliner -pipeliner-mve-cg %s -o /dev/null 2>&1 | FileCheck %s
+# REQUIRES: asserts
+
+# Test a case where fenv is enabled, there are instructions that may raise a
+# floating-point exception, and there is a barrier-event instruction. In this
+# case their order must not change.
+#
+# FIXME: Currently the following dependencies are missed.
+#
+# Loop carried edges from SU(7)
+#   Order
+#     SU(2)
+#     SU(3)
+#     SU(4)
+#     SU(5)
+
+# CHECK:      ===== Loop Carried Edges Begin =====
+# CHECK-NEXT: ===== Loop Carried Edges End =====
+
+--- |
+  @x = dso_local global i32 0, align 4
+
+  define dso_local void @f(ptr nocapture noundef writeonly %a, float noundef %y, i32 noundef %n) {
+  entry:
+    %cmp6 = icmp sgt i32 %n, 0
+    br i1 %cmp6, label %for.body.preheader, label %for.cond.cleanup
+
+  for.body.preheader:
+    %wide.trip.count = zext nneg i32 %n to i64
+    br label %for.body
+
+  for.cond.cleanup:
+    ret void
+
+  for.body:
+    %indvars.iv = phi i64 [ 0, %for.body.preheader ], [ %indvars.iv.next, %for.body ]
+    %tmp9 = trunc i64 %indvars.iv to i32
+    %conv = tail call float @llvm.experimental.constrained.sitofp.f32.i32(i32 %tmp9, metadata !"round.dynamic", metadata !"fpexcept.strict") #2
+    %add = tail call float @llvm.experimental.constrained.fadd.f32(float %conv, float %y, metadata !"round.dynamic", metadata !"fpexcept.strict") #2
+    %0 = shl nuw nsw i64 %indvars.iv, 2
+    %scevgep = getelementptr i8, ptr %a, i64 %0
+    store float %add, ptr %scevgep, align 4, !tbaa !6
+    %1 = load volatile i32, ptr @x, align 4, !tbaa !10
+    %2 = zext i32 %1 to i64
+    %3 = add i64 %indvars.iv, %2
+    %tmp = trunc i64 %3 to i32
+    store volatile i32 %tmp, ptr @x, align 4, !tbaa !10
+    %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+    %exitcond.not = icmp eq i64 %wide.trip.count, %indvars.iv.next
+    br i1 %exitcond.not, label %for.cond.cleanup, label %for.body
+  }
+
+  declare float @llvm.experimental.constrained.sitofp.f32.i32(i32, metadata, metadata)
+
+  declare float @llvm.experimental.constrained.fadd.f32(float, float, metadata, metadata)
+
+  attributes #2 = { strictfp }
+
+  !6 = !{!7, !7, i64 0}
+  !7 = !{!"float", !8, i64 0}
+  !8 = !{!"omnipotent char", !9, i64 0}
+  !9 = !{!"Simple C/C++ TBAA"}
+  !10 = !{!11, !11, i64 0}
+  !11 = !{!"int", !8, i64 0}
+
+...
+---
+name:            f
+tracksRegLiveness: true
+body:             |
+  bb.0.entry:
+    successors: %bb.1, %bb.2
+    liveins: $x0, $s0, $w1
+  
+    %5:gpr32common = COPY $w1
+    %4:fpr32 = COPY $s0
+    %3:gpr64common = COPY $x0
+    dead $wzr = SUBSWri %5, 1, 0, implicit-def $nzcv
+    Bcc 11, %bb.2, implicit $nzcv
+    B %bb.1
+  
+  bb.1.for.body.preheader:
+    %8:gpr32 = ORRWrs $wzr, %5, 0
+    %0:gpr64 = SUBREG_TO_REG 0, killed %8, %subreg.sub_32
+    %9:gpr64all = COPY $xzr
+    %7:gpr64all = COPY %9
+    %13:gpr64common = ADRP target-flags(aarch64-page) @x
+    B %bb.3
+  
+  bb.2.for.cond.cleanup:
+    RET_ReallyLR
+  
+  bb.3.for.body:
+    successors: %bb.2, %bb.3
+  
+    %1:gpr64common = PHI %7, %bb.1, %2, %bb.3
+    %10:gpr32 = COPY %1.sub_32
+    %11:fpr32 = SCVTFUWSri %10, implicit $fpcr
+    %12:fpr32 = FADDSrr killed %11, %4, implicit $fpcr
+    STRSroX killed %12, %3, %1, 0, 1 :: (store (s32) into %ir.scevgep, !tbaa !6)
+    %14:gpr32 = LDRWui %13, target-flags(aarch64-pageoff, aarch64-nc) @x :: (volatile dereferenceable load (s32) from @x, !tbaa !10)
+    %15:gpr32 = ADDWrr %10, killed %14
+    STRWui killed %15, %13, target-flags(aarch64-pageoff, aarch64-nc) @x :: (volatile store (s32) into @x, !tbaa !10)
+    %16:gpr64common = nuw nsw ADDXri %1, 1, 0
+    %2:gpr64all = COPY %16
+    dead $xzr = SUBSXrr %0, %16, implicit-def $nzcv
+    Bcc 0, %bb.2, implicit $nzcv
+    B %bb.3
+...
diff --git a/llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions2.mir b/llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions2.mir
new file mode 100644
index 0000000000000..6116f15811ec7
--- /dev/null
+++ b/llvm/test/CodeGen/AArch64/sms-loop-carried-fp-exceptions2.mir
@@ -0,0 +1,99 @@
+# RUN: llc -mtriple=aarch64 -run-pass=pipeliner -debug-only=pipeliner -aarch64-enable-pipeliner -pipeliner-mve-cg %s -o /dev/null 2>&1 | FileCheck %s
+# REQUIRES: asserts
+
+# Test a case where fenv is enabled, there are instructions that may raise a
+# floating-point exception, but there is no barrier-event instruction. In this
+# case no loop-carried dependencies are necessary.
+
+# CHECK:      ===== Loop Carried Edges Begin =====
+# CHECK-NEXT: ===== Loop Carried Edges End =====
+
+--- |
+  define dso_local float @f(ptr nocapture noundef writeonly %a, float noundef %y, i32 noundef %n) local_unnamed_addr {
+  entry:
+    %conv = tail call float @llvm.experimental.constrained.fptrunc.f32.f64(double 1.000000e+00, metadata !"round.dynamic", metadata !"fpexcept.strict")
+    %cmp8 = icmp sgt i32 %n, 0
+    br i1 %cmp8, label %for.body.preheader, label %for.cond.cleanup
+
+  for.body.preheader:
+    %wide.trip.count = zext nneg i32 %n to i64
+    br label %for.body
+
+  for.cond.cleanup:
+    %acc.0.lcssa = phi float [ %conv, %entry ], [ %mul, %for.body ]
+    ret float %acc.0.lcssa
+
+  for.body:
+    %indvars.iv = phi i64 [ 0, %for.body.preheader ], [ %indvars.iv.next, %for.body ]
+    %acc.010 = phi float [ %conv, %for.body.preheader ], [ %mul, %for.body ]
+    %tmp = trunc i64 %indvars.iv to i32
+    %conv2 = tail call float @llvm.experimental.constrained.sitofp.f32.i32(i32 %tmp, metadata !"round.dynamic", metadata !"fpexcept.strict")
+    %add = tail call float @llvm.experimental.constrained.fadd.f32(float %conv2, float %y, metadata !"round.dynamic", metadata !"fpexcept.strict")
+    %mul = tail call float @llvm.experimental.constrained.fmul.f32(float %acc.010, float %add, metadata !"round.dynamic", metadata !"fpexcept.strict")
+    %0 = shl nuw nsw i64 %indvars.iv, 2
+    %scevgep = getelementptr i8, ptr %a, i64 %0
+    store float %add, ptr %scevgep, align 4, !tbaa !6
+    %indvars.iv.next = add nuw nsw i64 %indvars.iv, 1
+    %exitcond.not = icmp eq i64 %wide.trip.count, %indvars.iv.next
+    br i1 %exitcond.not, label %for.cond.cleanup, label %for.body
+  }
+
+  declare float @llvm.experimental.constrained.fptrunc.f32.f64(double, metadata, metadata)
+
+  declare float @llvm.experimental.constrained.sitofp.f32.i32(i32, metadata, metadata)
+
+  declare float @llvm.experimental.constrained.fadd.f32(float, float, metadata, metadata)
+
+  declare float @llvm.experimental.constrained.fmul.f32(float, float, metadata, metadata)
+
+  !6 = !{!7, !7, i64 0}
+  !7 = !{!"float", !8, i64 0}
+  !8 = !{!"omnipotent char", !9, i64 0}
+  !9 = !{!"Simple C/C++ TBAA"}
+
+...
+---
+name:            f
+tracksRegLiveness: true
+body:             |
+  bb.0.entry:
+    successors: %bb.1, %bb.2
+    liveins: $x0, $s0, $w1
+  
+    %9:gpr32common = COPY $w1
+    %8:fpr32 = COPY $s0
+    %7:gpr64common = COPY $x0
+    %10:fpr64 = FMOVDi 112
+    %0:fpr32 = FCVTSDr killed %10, implicit $fpcr
+    dead $wzr = SUBSWri %9, 1, 0, implicit-def $nzcv
+    Bcc 11, %bb.2, implicit $nzcv
+    B %bb.1
+  
+  bb.1.for.body.preheader:
+    %13:gpr32 = ORRWrs $wzr, %9, 0
+    %1:gpr64 = SUBREG_TO_REG 0, killed %13, %subreg.sub_32
+    %14:gpr64all = COPY $xzr
+    %12:gpr64all = COPY %14
+    B %bb.3
+  
+  bb.2.for.cond.cleanup:
+    %2:fpr32 = PHI %0, %bb.0, %5, %bb.3
+    $s0 = COPY %2
+    RET_ReallyLR implicit $s0
+  
+  bb.3.for.body:
+    ...
[truncated]
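
Stepping back from the (truncated) diff: the chunk partitioning in computeDependenciesAux can be summarized with a small standalone sketch (ours; the tags and instruction stream are made up). A barrier closes the current chunk, so loop-carried WAR edges are only considered between memory operations within the same chunk.

  #include <cstdio>
  #include <vector>

  enum class Tag { Barrier, LoadOrStore, FPExceptions };

  int main() {
    // Hypothetical instruction stream: two memory ops, a barrier (e.g. a
    // call), then two more memory ops.
    std::vector<Tag> Stream = {Tag::LoadOrStore, Tag::LoadOrStore,
                               Tag::Barrier, Tag::LoadOrStore,
                               Tag::LoadOrStore};
    // Mirror computeDependenciesAux: a Barrier starts a new chunk.
    std::vector<std::vector<int>> Chunks(1);
    for (int I = 0; I != (int)Stream.size(); ++I) {
      if (Stream[I] == Tag::Barrier)
        Chunks.emplace_back();
      else if (Stream[I] == Tag::LoadOrStore)
        Chunks.back().push_back(I);
      // FPExceptions instructions are skipped for now (a TODO in the patch).
    }
    std::printf("%zu chunks\n", Chunks.size()); // prints "2 chunks"
    return 0;
  }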

@kasuga-fj kasuga-fj requested review from aankit-ca and iajbar April 28, 2025 16:17
@aankit-ca (Contributor)

@kasuga-fj Should this PR be against the main branch?

@kasuga-fj (Contributor, Author)

This PR is part of a stacked pull-request series and depends on #137662. The target branch will automatically change to main after the dependent PR is merged. Could you please take a look at #137662 first?
