1//===- SeparateConstOffsetFromGEP.cpp -------------------------------------===//
2//
3// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
4// See https://llvm.org/LICENSE.txt for license information.
5// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
6//
7//===----------------------------------------------------------------------===//
8//
9// Loop unrolling may create many similar GEPs for array accesses.
10// e.g., a 2-level loop
11//
12// float a[32][32]; // global variable
13//
14// for (int i = 0; i < 2; ++i) {
15// for (int j = 0; j < 2; ++j) {
16// ...
17// ... = a[x + i][y + j];
18// ...
19// }
20// }
21//
22// will probably be unrolled to:
23//
24// gep %a, 0, %x, %y; load
25// gep %a, 0, %x, %y + 1; load
26// gep %a, 0, %x + 1, %y; load
27// gep %a, 0, %x + 1, %y + 1; load
28//
29// LLVM's GVN does not use partial redundancy elimination yet, and is thus
30// unable to reuse (gep %a, 0, %x, %y). As a result, this misoptimization incurs
31// significant slowdown in targets with limited addressing modes. For instance,
32// because the PTX target does not support the reg+reg addressing mode, the
33// NVPTX backend emits PTX code that literally computes the pointer address of
34// each GEP, wasting tons of registers. It emits the following PTX for the
35// first load and similar PTX for other loads.
36//
37// mov.u32 %r1, %x;
38// mov.u32 %r2, %y;
39// mul.wide.u32 %rl2, %r1, 128;
40// mov.u64 %rl3, a;
41// add.s64 %rl4, %rl3, %rl2;
42// mul.wide.u32 %rl5, %r2, 4;
43// add.s64 %rl6, %rl4, %rl5;
44// ld.global.f32 %f1, [%rl6];
45//
46// To reduce the register pressure, the optimization implemented in this file
47// merges the common part of a group of GEPs, so we can compute each pointer
48// address by adding a simple offset to the common part, saving many registers.
49//
50// It works by splitting each GEP into a variadic base and a constant offset.
51// The variadic base can be computed once and reused by multiple GEPs, and the
52// constant offsets can be nicely folded into the reg+immediate addressing mode
53// (supported by most targets) without using any extra register.
54//
55// For instance, we transform the four GEPs and four loads in the above example
56// into:
57//
58// base = gep a, 0, x, y
59// load base
60// load base + 1 * sizeof(float)
61// load base + 32 * sizeof(float)
62// load base + 33 * sizeof(float)
63//
64// Given the transformed IR, a backend that supports the reg+immediate
65// addressing mode can easily fold the pointer arithmetic into the loads. For
66// example, the NVPTX backend can easily fold the pointer arithmetic into the
67// ld.global.f32 instructions, and the resultant PTX uses far fewer registers.
68//
69// mov.u32 %r1, %tid.x;
70// mov.u32 %r2, %tid.y;
71// mul.wide.u32 %rl2, %r1, 128;
72// mov.u64 %rl3, a;
73// add.s64 %rl4, %rl3, %rl2;
74// mul.wide.u32 %rl5, %r2, 4;
75// add.s64 %rl6, %rl4, %rl5;
76// ld.global.f32 %f1, [%rl6]; // so far the same as unoptimized PTX
77// ld.global.f32 %f2, [%rl6+4]; // much better
78// ld.global.f32 %f3, [%rl6+128]; // much better
79// ld.global.f32 %f4, [%rl6+132]; // much better
80//
81// Another improvement enabled by the LowerGEP flag is to lower a GEP with
82// multiple indices to multiple GEPs with a single index.
83// Such a transformation can have the following benefits:
84// (1) It can always extract constants in the indices of structure types.
85// (2) After such lowering, there are more optimization opportunities such as
86//     CSE, LICM and CGP.
87//
88// E.g. The following GEPs have multiple indices:
89// BB1:
90// %p = getelementptr [10 x %struct], ptr %ptr, i64 %i, i64 %j1, i32 3
91// load %p
92// ...
93// BB2:
94// %p2 = getelementptr [10 x %struct], ptr %ptr, i64 %i, i64 %j2, i32 2
95// load %p2
96// ...
97//
98// We cannot apply CSE to the common part related to index "i64 %i" in this
99// form. Lowering GEPs can achieve such goals.
100//
101// This pass will lower a GEP with multiple indices into multiple GEPs with a
102// single index:
103// BB1:
104// %2 = mul i64 %i, length_of_10xstruct ; CSE opportunity
105// %3 = getelementptr i8, ptr %ptr, i64 %2 ; CSE opportunity
106// %4 = mul i64 %j1, length_of_struct
107// %5 = getelementptr i8, ptr %3, i64 %4
108// %p = getelementptr i8, ptr %5, struct_field_3 ; Constant offset
109// load %p
110// ...
111// BB2:
112// %8 = mul i64 %i, length_of_10xstruct ; CSE opportunity
113// %9 = getelementptr i8, ptr %ptr, i64 %8 ; CSE opportunity
114// %10 = mul i64 %j2, length_of_struct
115// %11 = getelementptr i8, ptr %9, i64 %10
116// %p2 = getelementptr i8, ptr %11, struct_field_2 ; Constant offset
117// load %p2
118// ...
119//
120// Lowering GEPs can also benefit other passes such as LICM and CGP.
121// LICM (Loop Invariant Code Motion) cannot hoist/sink a GEP with multiple
122// indices if one of the indices is variant. If we lower such a GEP into
123// invariant parts and variant parts, LICM can hoist/sink those invariant parts.
124// CGP (CodeGen Prepare) tries to sink address calculations that match the
125// target's addressing modes. A GEP with multiple indices may not match and will
126// not be sunk. If we lower such a GEP into smaller parts, CGP may sink some of
127// them, so we end up with a better addressing mode.
128//
129//===----------------------------------------------------------------------===//
130
132#include "llvm/ADT/APInt.h"
133#include "llvm/ADT/DenseMap.h"
135#include "llvm/ADT/SmallVector.h"
141#include "llvm/IR/BasicBlock.h"
142#include "llvm/IR/Constant.h"
143#include "llvm/IR/Constants.h"
144#include "llvm/IR/DataLayout.h"
145#include "llvm/IR/DerivedTypes.h"
146#include "llvm/IR/Dominators.h"
147#include "llvm/IR/Function.h"
149#include "llvm/IR/IRBuilder.h"
150#include "llvm/IR/InstrTypes.h"
151#include "llvm/IR/Instruction.h"
152#include "llvm/IR/Instructions.h"
153#include "llvm/IR/Module.h"
154#include "llvm/IR/PassManager.h"
155#include "llvm/IR/PatternMatch.h"
156#include "llvm/IR/Type.h"
157#include "llvm/IR/User.h"
158#include "llvm/IR/Value.h"
160#include "llvm/Pass.h"
161#include "llvm/Support/Casting.h"
167#include <cassert>
168#include <cstdint>
169#include <string>
170
171using namespace llvm;
172using namespace llvm::PatternMatch;
173
175 "disable-separate-const-offset-from-gep", cl::init(false),
176 cl::desc("Do not separate the constant offset from a GEP instruction"),
177 cl::Hidden);
178
179// Setting this flag may emit false positives when the input module already
180// contains dead instructions. Therefore, we set it only in unit tests that are
181// free of dead code.
182static cl::opt<bool>
183 VerifyNoDeadCode("reassociate-geps-verify-no-dead-code", cl::init(false),
184 cl::desc("Verify this pass produces no dead code"),
185 cl::Hidden);
186
187namespace {
188
189/// A helper class for separating a constant offset from a GEP index.
190///
191/// In real programs, a GEP index may be more complicated than a simple addition
192/// of something and a constant integer that can be trivially split. For
193/// example, to split ((a << 3) | 5) + b, we need to search deeper for the
194/// constant offset, so that we can separate the index into (a << 3) + b and 5.
195///
196/// Therefore, this class looks into the expression that computes a given GEP
197/// index, and tries to find a constant integer that can be hoisted to the
198/// outermost level of the expression as an addition. Not every constant in an
199/// expression can jump out. e.g., we cannot transform (b * (a + 5)) to (b * a +
200/// 5); nor can we transform (3 * (a + 5)) to (3 * a + 5); however, in this case,
201/// -instcombine probably already optimized (3 * (a + 5)) to (3 * a + 15).
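///
/// As a rough illustration (hypothetical IR; the value names and the float
/// element type are made up for this sketch), given an index such as
///   %idx = or disjoint i64 %shl, 5
///   %p   = getelementptr inbounds float, ptr %base, i64 %idx
/// the extractor lets the pass split off the 5 so it can later be applied as
/// a trailing byte offset:
///   %p0  = getelementptr float, ptr %base, i64 %shl
///   %p   = getelementptr i8, ptr %p0, i64 20   ; 5 * sizeof(float)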
202class ConstantOffsetExtractor {
203public:
204 /// Extracts a constant offset from the given GEP index. It returns the
205 /// new index representing the remainder (equal to the original index minus
206 /// the constant offset), or nullptr if we cannot extract a constant offset.
207 /// \p Idx The given GEP index
208 /// \p GEP The given GEP
209 /// \p UserChainTail Outputs the tail of UserChain so that we can
210 /// garbage-collect unused instructions in UserChain.
211 /// \p PreservesNUW Outputs whether the extraction allows preserving the
212 /// GEP's nuw flag, if it has one.
213 static Value *Extract(Value *Idx, GetElementPtrInst *GEP,
214 User *&UserChainTail, bool &PreservesNUW);
215
216 /// Looks for a constant offset from the given GEP index without extracting
217 /// it. It returns the numeric value of the extracted constant offset (0 if
218/// failed). The meanings of the arguments are the same as in Extract.
219 static int64_t Find(Value *Idx, GetElementPtrInst *GEP);
220
221private:
222 ConstantOffsetExtractor(BasicBlock::iterator InsertionPt)
223 : IP(InsertionPt), DL(InsertionPt->getDataLayout()) {}
224
225 /// Searches the expression that computes V for a non-zero constant C s.t.
226 /// V can be reassociated into the form V' + C. If the search is
227 /// successful, returns C and updates UserChain as a def-use chain from C to V;
228 /// otherwise, UserChain is empty.
229 ///
230 /// \p V The given expression
231 /// \p SignExtended Whether V will be sign-extended in the computation of the
232 /// GEP index
233 /// \p ZeroExtended Whether V will be zero-extended in the computation of the
234 /// GEP index
235 /// \p NonNegative Whether V is guaranteed to be non-negative. For example,
236 /// an index of an inbounds GEP is guaranteed to be
237 /// non-negative. Leveraging this, we can better split
238 /// inbounds GEPs.
239 APInt find(Value *V, bool SignExtended, bool ZeroExtended, bool NonNegative);
240
241 /// A helper function to look into both operands of a binary operator.
242 APInt findInEitherOperand(BinaryOperator *BO, bool SignExtended,
243 bool ZeroExtended);
244
245 /// After finding the constant offset C from the GEP index I, we build a new
246 /// index I' s.t. I' + C = I. This function builds and returns the new
247 /// index I' according to UserChain produced by function "find".
248 ///
249 /// The building conceptually takes two steps:
250 /// 1) iteratively distribute sext/zext/trunc towards the leaves of the
251 /// expression tree that computes I
252 /// 2) reassociate the expression tree to the form I' + C.
253 ///
254 /// For example, to extract the 5 from sext(a + (b + 5)), we first distribute
255 /// sext to a, b and 5 so that we have
256 /// sext(a) + (sext(b) + 5).
257 /// Then, we reassociate it to
258 /// (sext(a) + sext(b)) + 5.
259 /// Given this form, we know I' is sext(a) + sext(b).
260 Value *rebuildWithoutConstOffset();
261
262 /// After the first step of rebuilding the GEP index without the constant
263 /// offset, distribute sext/zext/trunc to the operands of all operators in
264 /// UserChain. e.g., zext(sext(a + (b + 5))) (assuming no overflow) =>
265 /// zext(sext(a)) + (zext(sext(b)) + zext(sext(5))).
266 ///
267 /// The function also updates UserChain to point to new subexpressions after
268 /// distributing sext/zext/trunc. e.g., the old UserChain of the above example
269 /// is
270 /// 5 -> b + 5 -> a + (b + 5) -> sext(...) -> zext(sext(...)),
271 /// and the new UserChain is
272 /// zext(sext(5)) -> zext(sext(b)) + zext(sext(5)) ->
273 /// zext(sext(a)) + (zext(sext(b)) + zext(sext(5)))
274 ///
275 /// \p ChainIndex The index to UserChain. ChainIndex is initially
276 /// UserChain.size() - 1, and is decremented during
277 /// the recursion.
278 Value *distributeCastsAndCloneChain(unsigned ChainIndex);
279
280 /// Reassociates the GEP index to the form I' + C and returns I'.
281 Value *removeConstOffset(unsigned ChainIndex);
282
283 /// A helper function to apply CastInsts, a list of sext/zext/trunc, to value
284 /// V. e.g., if CastInsts = [sext i32 to i64, zext i16 to i32], this function
285 /// returns "sext i32 (zext i16 V to i32) to i64".
286 Value *applyCasts(Value *V);
287
288 /// A helper function that returns whether we can trace into the operands
289 /// of binary operator BO for a constant offset.
290 ///
291 /// \p SignExtended Whether BO is surrounded by sext
292 /// \p ZeroExtended Whether BO is surrounded by zext
293 /// \p NonNegative Whether BO is known to be non-negative, e.g., an in-bound
294 /// array index.
295 bool CanTraceInto(bool SignExtended, bool ZeroExtended, BinaryOperator *BO,
296 bool NonNegative);
297
298 /// Analyze XOR instruction to extract disjoint constant bits that behave
299 /// like addition operations for improved address mode folding.
300 APInt extractDisjointBitsFromXor(BinaryOperator *XorInst);
301
302 /// The path from the constant offset to the old GEP index. e.g., if the GEP
303 /// index is "a * b + (c + 5)". After running function find, UserChain[0] will
304 /// be the constant 5, UserChain[1] will be the subexpression "c + 5", and
305 /// UserChain[2] will be the entire expression "a * b + (c + 5)".
306 ///
307 /// This path helps to rebuild the new GEP index.
308 SmallVector<User *, 8> UserChain;
309
310 /// A data structure used in rebuildWithoutConstOffset. Contains all
311 /// sext/zext/trunc instructions along UserChain.
312  SmallVector<CastInst *, 16> CastInsts;
313
314 /// Insertion position of cloned instructions.
315  BasicBlock::iterator IP;
316
317 const DataLayout &DL;
318};
319
320/// A pass that tries to split every GEP in the function into a variadic
321/// base and a constant offset. It is a FunctionPass because searching for the
322/// constant offset may inspect other basic blocks.
323class SeparateConstOffsetFromGEPLegacyPass : public FunctionPass {
324public:
325 static char ID;
326
327 SeparateConstOffsetFromGEPLegacyPass(bool LowerGEP = false)
328 : FunctionPass(ID), LowerGEP(LowerGEP) {
329    initializeSeparateConstOffsetFromGEPLegacyPassPass(
330        *PassRegistry::getPassRegistry());
331  }
332
333 void getAnalysisUsage(AnalysisUsage &AU) const override {
334 AU.addRequired<DominatorTreeWrapperPass>();
335 AU.addRequired<TargetTransformInfoWrapperPass>();
336 AU.addRequired<LoopInfoWrapperPass>();
337 AU.setPreservesCFG();
338 AU.addRequired<TargetLibraryInfoWrapperPass>();
339 }
340
341 bool runOnFunction(Function &F) override;
342
343private:
344 bool LowerGEP;
345};
346
347/// A pass that tries to split every GEP in the function into a variadic
348/// base and a constant offset. It runs on whole functions because searching
349/// for the constant offset may inspect other basic blocks.
350class SeparateConstOffsetFromGEP {
351public:
352 SeparateConstOffsetFromGEP(
353 DominatorTree *DT, LoopInfo *LI, TargetLibraryInfo *TLI,
354 function_ref<TargetTransformInfo &(Function &)> GetTTI, bool LowerGEP)
355 : DT(DT), LI(LI), TLI(TLI), GetTTI(GetTTI), LowerGEP(LowerGEP) {}
356
357 bool run(Function &F);
358
359private:
360 /// Track the operands of an add or sub.
361 using ExprKey = std::pair<Value *, Value *>;
362
363 /// Create a pair for use as a map key for a commutable operation.
364 static ExprKey createNormalizedCommutablePair(Value *A, Value *B) {
365 if (A < B)
366 return {A, B};
367 return {B, A};
368 }
369
370 /// Tries to split the given GEP into a variadic base and a constant offset,
371 /// and returns true if the splitting succeeds.
372 bool splitGEP(GetElementPtrInst *GEP);
373
374 /// Tries to reorder the given GEP with the GEP that produces the base if
375 /// doing so results in producing a constant offset as the outermost
376 /// index.
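  ///
  /// A small sketch (hypothetical IR; names, types and the constant are made
  /// up): a chain such as
  ///   %base = getelementptr float, ptr %p, i64 128
  ///   %q    = getelementptr float, ptr %base, i64 %i
  /// may be rewritten, when the target can fold the offset, into
  ///   %t    = getelementptr float, ptr %p, i64 %i
  ///   %q    = getelementptr float, ptr %t, i64 128
  /// so the constant offset becomes the outermost index.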
377 bool reorderGEP(GetElementPtrInst *GEP, TargetTransformInfo &TTI);
378
379 /// Lower a GEP with multiple indices into multiple GEPs with a single index.
380 /// Function splitGEP already split the original GEP into a variadic part and
381 /// a constant offset (i.e., AccumulativeByteOffset). This function lowers the
382 /// variadic part into a set of GEPs with a single index and applies
383 /// AccumulativeByteOffset to it.
384 /// \p Variadic The variadic part of the original GEP.
385 /// \p AccumulativeByteOffset The constant offset.
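  ///
  /// A rough sketch of the output (hypothetical IR; names, types and the
  /// offset value are made up): lowering the variadic part
  ///   getelementptr [32 x float], ptr %a, i64 %x, i64 %y
  /// with AccumulativeByteOffset = 132 produces approximately
  ///   %o0 = shl i64 %x, 7                       ; %x * sizeof([32 x float])
  ///   %p0 = getelementptr i8, ptr %a, i64 %o0
  ///   %o1 = shl i64 %y, 2                       ; %y * sizeof(float)
  ///   %p1 = getelementptr i8, ptr %p0, i64 %o1
  ///   %p  = getelementptr i8, ptr %p1, i64 132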
386 void lowerToSingleIndexGEPs(GetElementPtrInst *Variadic,
387 int64_t AccumulativeByteOffset);
388
389 /// Finds the constant offset within each index and accumulates them. If
390 /// LowerGEP is true, it finds in indices of both sequential and structure
391 /// types, otherwise it only finds in sequential indices. The output
392 /// NeedsExtraction indicates whether we successfully found a non-zero constant
393 /// offset.
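  ///
  /// A small worked example (hypothetical struct type %S and field number):
  /// with LowerGEP enabled, for
  ///   getelementptr [10 x %S], ptr %p, i64 %i, i64 (%j + 2), i32 1
  /// this returns 2 * sizeof(%S) + offsetof(%S, field 1) and sets
  /// NeedsExtraction; with LowerGEP disabled, only 2 * sizeof(%S) is
  /// accumulated because struct indices are left alone.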
394 int64_t accumulateByteOffset(GetElementPtrInst *GEP, bool &NeedsExtraction);
395
396 /// Canonicalize array indices to pointer-size integers. This helps to
397 /// simplify the logic of splitting a GEP. For example, if a + b is a
398 /// pointer-size integer, we have
399 /// gep base, a + b = gep (gep base, a), b
400 /// However, this equality may not hold if the size of a + b is smaller than
401 /// the pointer size, because LLVM conceptually sign-extends GEP indices to
402 /// pointer size before computing the address
403 /// (http://llvm.org/docs/LangRef.html#id181).
404 ///
405 /// This canonicalization is very likely already done in clang and
406 /// instcombine. Therefore, the program will probably remain the same.
407 ///
408 /// Returns true if the module changes.
409 ///
410 /// Verified in @i32_add in split-gep.ll
411 bool canonicalizeArrayIndicesToIndexSize(GetElementPtrInst *GEP);
412
413 /// Optimize sext(a)+sext(b) to sext(a+b) when a+b can't sign overflow.
414 /// SeparateConstOffsetFromGEP distributes a sext to leaves before extracting
415 /// the constant offset. After extraction, it becomes desirable to reunite the
416 /// distributed sexts. For example,
417 ///
418 ///   &a[sext(i +nsw (j +nsw 5))]
419 /// => distribute &a[sext(i) +nsw (sext(j) +nsw 5)]
420 /// => constant extraction &a[sext(i) + sext(j)] + 5
421 /// => reunion &a[sext(i +nsw j)] + 5
422 bool reuniteExts(Function &F);
423
424 /// A helper that reunites sexts in an instruction.
425 bool reuniteExts(Instruction *I);
426
427 /// Find the closest dominator of <Dominatee> that is equivalent to <Key>.
428 Instruction *findClosestMatchingDominator(
429 ExprKey Key, Instruction *Dominatee,
430 DenseMap<ExprKey, SmallVector<Instruction *, 2>> &DominatingExprs);
431
432 /// Verify F is free of dead code.
433 void verifyNoDeadCode(Function &F);
434
435 bool hasMoreThanOneUseInLoop(Value *v, Loop *L);
436
437  // Swap the index operands of two GEPs.
438 void swapGEPOperand(GetElementPtrInst *First, GetElementPtrInst *Second);
439
440  // Check if it is safe to swap the index operands of two GEPs.
441 bool isLegalToSwapOperand(GetElementPtrInst *First, GetElementPtrInst *Second,
442 Loop *CurLoop);
443
444 const DataLayout *DL = nullptr;
445 DominatorTree *DT = nullptr;
446 LoopInfo *LI;
447 TargetLibraryInfo *TLI;
448 // Retrieved lazily since not always used.
449 function_ref<TargetTransformInfo &(Function &)> GetTTI;
450
451 /// Whether to lower a GEP with multiple indices into arithmetic operations or
452 /// multiple GEPs with a single index.
453 bool LowerGEP;
454
455 DenseMap<ExprKey, SmallVector<Instruction *, 2>> DominatingAdds;
456 DenseMap<ExprKey, SmallVector<Instruction *, 2>> DominatingSubs;
457};
458
459} // end anonymous namespace
460
461char SeparateConstOffsetFromGEPLegacyPass::ID = 0;
462
463INITIALIZE_PASS_BEGIN(
464    SeparateConstOffsetFromGEPLegacyPass, "separate-const-offset-from-gep",
465    "Split GEPs to a variadic base and a constant offset for better CSE", false,
466    false)
467INITIALIZE_PASS_DEPENDENCY(DominatorTreeWrapperPass)
468INITIALIZE_PASS_DEPENDENCY(ScalarEvolutionWrapperPass)
469INITIALIZE_PASS_DEPENDENCY(TargetTransformInfoWrapperPass)
470INITIALIZE_PASS_DEPENDENCY(LoopInfoWrapperPass)
471INITIALIZE_PASS_DEPENDENCY(TargetLibraryInfoWrapperPass)
472INITIALIZE_PASS_END(
473    SeparateConstOffsetFromGEPLegacyPass, "separate-const-offset-from-gep",
474    "Split GEPs to a variadic base and a constant offset for better CSE", false,
475    false)
476
477FunctionPass *llvm::createSeparateConstOffsetFromGEPPass(bool LowerGEP) {
478  return new SeparateConstOffsetFromGEPLegacyPass(LowerGEP);
479}
480
481bool ConstantOffsetExtractor::CanTraceInto(bool SignExtended,
482 bool ZeroExtended,
483 BinaryOperator *BO,
484 bool NonNegative) {
485 // We only consider ADD, SUB and OR, because a non-zero constant found in
486 // expressions composed of these operations can be easily hoisted as a
487 // constant offset by reassociation.
488 if (BO->getOpcode() != Instruction::Add &&
489 BO->getOpcode() != Instruction::Sub &&
490 BO->getOpcode() != Instruction::Or) {
491 return false;
492 }
493
494 Value *LHS = BO->getOperand(0), *RHS = BO->getOperand(1);
495 // Do not trace into "or" unless it is equivalent to "add nuw nsw".
496 // This is the case if the or's disjoint flag is set.
497 if (BO->getOpcode() == Instruction::Or &&
498 !cast<PossiblyDisjointInst>(BO)->isDisjoint())
499 return false;
500
501 // FIXME: We don't currently support constants from the RHS of subs,
502  // when we are zero-extended, because we need a way to zero-extend
503 // them before they are negated.
504 if (ZeroExtended && !SignExtended && BO->getOpcode() == Instruction::Sub)
505 return false;
506
507 // In addition, tracing into BO requires that its surrounding sext/zext/trunc
508 // (if any) is distributable to both operands.
509 //
510 // Suppose BO = A op B.
511 // SignExtended | ZeroExtended | Distributable?
512 // --------------+--------------+----------------------------------
513 // 0 | 0 | true because no s/zext exists
514 // 0 | 1 | zext(BO) == zext(A) op zext(B)
515 // 1 | 0 | sext(BO) == sext(A) op sext(B)
516 // 1 | 1 | zext(sext(BO)) ==
517 // | | zext(sext(A)) op zext(sext(B))
518 if (BO->getOpcode() == Instruction::Add && !ZeroExtended && NonNegative) {
519 // If a + b >= 0 and (a >= 0 or b >= 0), then
520 // sext(a + b) = sext(a) + sext(b)
521 // even if the addition is not marked nsw.
522 //
523 // Leveraging this invariant, we can trace into an sext'ed inbound GEP
524 // index if the constant offset is non-negative.
525 //
526 // Verified in @sext_add in split-gep.ll.
527 if (ConstantInt *ConstLHS = dyn_cast<ConstantInt>(LHS)) {
528 if (!ConstLHS->isNegative())
529 return true;
530 }
531 if (ConstantInt *ConstRHS = dyn_cast<ConstantInt>(RHS)) {
532 if (!ConstRHS->isNegative())
533 return true;
534 }
535 }
536
537 // sext (add/sub nsw A, B) == add/sub nsw (sext A), (sext B)
538 // zext (add/sub nuw A, B) == add/sub nuw (zext A), (zext B)
539 if (BO->getOpcode() == Instruction::Add ||
540 BO->getOpcode() == Instruction::Sub) {
541 if (SignExtended && !BO->hasNoSignedWrap())
542 return false;
543 if (ZeroExtended && !BO->hasNoUnsignedWrap())
544 return false;
545 }
546
547 return true;
548}
549
550APInt ConstantOffsetExtractor::findInEitherOperand(BinaryOperator *BO,
551 bool SignExtended,
552 bool ZeroExtended) {
553 // Save off the current height of the chain, in case we need to restore it.
554 size_t ChainLength = UserChain.size();
555
556 // BO being non-negative does not shed light on whether its operands are
557 // non-negative. Clear the NonNegative flag here.
558 APInt ConstantOffset = find(BO->getOperand(0), SignExtended, ZeroExtended,
559 /* NonNegative */ false);
560 // If we found a constant offset in the left operand, stop and return that.
561 // This shortcut might cause us to miss opportunities of combining the
562 // constant offsets in both operands, e.g., (a + 4) + (b + 5) => (a + b) + 9.
563 // However, such cases are probably already handled by -instcombine,
564 // given this pass runs after the standard optimizations.
565 if (ConstantOffset != 0) return ConstantOffset;
566
567 // Reset the chain back to where it was when we started exploring this node,
568 // since visiting the LHS didn't pan out.
569 UserChain.resize(ChainLength);
570
571 ConstantOffset = find(BO->getOperand(1), SignExtended, ZeroExtended,
572 /* NonNegative */ false);
574  // If BO is a sub operator, negate the constant offset found in the right
574 // operand.
575 if (BO->getOpcode() == Instruction::Sub)
576 ConstantOffset = -ConstantOffset;
577
578 // If RHS wasn't a suitable candidate either, reset the chain again.
579 if (ConstantOffset == 0)
580 UserChain.resize(ChainLength);
581
582 return ConstantOffset;
583}
584
585APInt ConstantOffsetExtractor::find(Value *V, bool SignExtended,
586 bool ZeroExtended, bool NonNegative) {
587 // TODO(jingyue): We could trace into integer/pointer casts, such as
588 // inttoptr, ptrtoint, bitcast, and addrspacecast. We choose to handle only
589 // integers because it gives good enough results for our benchmarks.
590 unsigned BitWidth = cast<IntegerType>(V->getType())->getBitWidth();
591
592 // We cannot do much with Values that are not a User, such as an Argument.
593 User *U = dyn_cast<User>(V);
594 if (U == nullptr) return APInt(BitWidth, 0);
595
596 APInt ConstantOffset(BitWidth, 0);
597 if (ConstantInt *CI = dyn_cast<ConstantInt>(V)) {
598 // Hooray, we found it!
599 ConstantOffset = CI->getValue();
600 } else if (BinaryOperator *BO = dyn_cast<BinaryOperator>(V)) {
601 // Trace into subexpressions for more hoisting opportunities.
602 if (CanTraceInto(SignExtended, ZeroExtended, BO, NonNegative))
603 ConstantOffset = findInEitherOperand(BO, SignExtended, ZeroExtended);
604 // Handle XOR with disjoint bits that can be treated as addition.
605 else if (BO->getOpcode() == Instruction::Xor)
606 ConstantOffset = extractDisjointBitsFromXor(BO);
607 } else if (isa<TruncInst>(V)) {
608 ConstantOffset =
609 find(U->getOperand(0), SignExtended, ZeroExtended, NonNegative)
610 .trunc(BitWidth);
611 } else if (isa<SExtInst>(V)) {
612 ConstantOffset = find(U->getOperand(0), /* SignExtended */ true,
613 ZeroExtended, NonNegative).sext(BitWidth);
614 } else if (isa<ZExtInst>(V)) {
615 // As an optimization, we can clear the SignExtended flag because
616 // sext(zext(a)) = zext(a). Verified in @sext_zext in split-gep.ll.
617 //
618 // Clear the NonNegative flag, because zext(a) >= 0 does not imply a >= 0.
619 ConstantOffset =
620 find(U->getOperand(0), /* SignExtended */ false,
621 /* ZeroExtended */ true, /* NonNegative */ false).zext(BitWidth);
622 }
623
624 // If we found a non-zero constant offset, add it to the path for
625 // rebuildWithoutConstOffset. Zero is a valid constant offset, but doesn't
626 // help this optimization.
627 if (ConstantOffset != 0)
628 UserChain.push_back(U);
629 return ConstantOffset;
630}
631
632Value *ConstantOffsetExtractor::applyCasts(Value *V) {
633 Value *Current = V;
634 // CastInsts is built in the use-def order. Therefore, we apply them to V
635 // in the reversed order.
636 for (CastInst *I : llvm::reverse(CastInsts)) {
637 if (Constant *C = dyn_cast<Constant>(Current)) {
638 // Try to constant fold the cast.
639 Current = ConstantFoldCastOperand(I->getOpcode(), C, I->getType(), DL);
640 if (Current)
641 continue;
642 }
643
644 Instruction *Cast = I->clone();
645 Cast->setOperand(0, Current);
646 // In ConstantOffsetExtractor::find we do not analyze nuw/nsw for trunc, so
647 // we assume that it is ok to redistribute trunc over add/sub/or. But for
648 // example (add (trunc nuw A), (trunc nuw B)) is more poisonous than (trunc
649  // nuw (add A, B)). To make such redistributions legal we drop all the
650 // poison generating flags from cloned trunc instructions here.
651 if (isa<TruncInst>(Cast))
652      Cast->dropPoisonGeneratingFlags();
653    Cast->insertBefore(*IP->getParent(), IP);
654 Current = Cast;
655 }
656 return Current;
657}
658
659Value *ConstantOffsetExtractor::rebuildWithoutConstOffset() {
660 distributeCastsAndCloneChain(UserChain.size() - 1);
661 // Remove all nullptrs (used to be sext/zext/trunc) from UserChain.
662 unsigned NewSize = 0;
663 for (User *I : UserChain) {
664 if (I != nullptr) {
665 UserChain[NewSize] = I;
666 NewSize++;
667 }
668 }
669 UserChain.resize(NewSize);
670 return removeConstOffset(UserChain.size() - 1);
671}
672
673Value *
674ConstantOffsetExtractor::distributeCastsAndCloneChain(unsigned ChainIndex) {
675 User *U = UserChain[ChainIndex];
676 if (ChainIndex == 0) {
677    assert(isa<ConstantInt>(U));
678    // If U is a ConstantInt, applyCasts will return a ConstantInt as well.
679 return UserChain[ChainIndex] = cast<ConstantInt>(applyCasts(U));
680 }
681
682 if (CastInst *Cast = dyn_cast<CastInst>(U)) {
683 assert(
684 (isa<SExtInst>(Cast) || isa<ZExtInst>(Cast) || isa<TruncInst>(Cast)) &&
685 "Only following instructions can be traced: sext, zext & trunc");
686 CastInsts.push_back(Cast);
687 UserChain[ChainIndex] = nullptr;
688 return distributeCastsAndCloneChain(ChainIndex - 1);
689 }
690
691  // Function find only traces into BinaryOperator and CastInst.
692 BinaryOperator *BO = cast<BinaryOperator>(U);
693 // OpNo = which operand of BO is UserChain[ChainIndex - 1]
694 unsigned OpNo = (BO->getOperand(0) == UserChain[ChainIndex - 1] ? 0 : 1);
695 Value *TheOther = applyCasts(BO->getOperand(1 - OpNo));
696 Value *NextInChain = distributeCastsAndCloneChain(ChainIndex - 1);
697
698 BinaryOperator *NewBO = nullptr;
699 if (OpNo == 0) {
700 NewBO = BinaryOperator::Create(BO->getOpcode(), NextInChain, TheOther,
701 BO->getName(), IP);
702 } else {
703 NewBO = BinaryOperator::Create(BO->getOpcode(), TheOther, NextInChain,
704 BO->getName(), IP);
705 }
706 return UserChain[ChainIndex] = NewBO;
707}
708
709Value *ConstantOffsetExtractor::removeConstOffset(unsigned ChainIndex) {
710 if (ChainIndex == 0) {
711 assert(isa<ConstantInt>(UserChain[ChainIndex]));
712 return ConstantInt::getNullValue(UserChain[ChainIndex]->getType());
713 }
714
715 BinaryOperator *BO = cast<BinaryOperator>(UserChain[ChainIndex]);
716 assert((BO->use_empty() || BO->hasOneUse()) &&
717 "distributeCastsAndCloneChain clones each BinaryOperator in "
718 "UserChain, so no one should be used more than "
719 "once");
720
721 unsigned OpNo = (BO->getOperand(0) == UserChain[ChainIndex - 1] ? 0 : 1);
722 assert(BO->getOperand(OpNo) == UserChain[ChainIndex - 1]);
723 Value *NextInChain = removeConstOffset(ChainIndex - 1);
724 Value *TheOther = BO->getOperand(1 - OpNo);
725
726 if (ConstantInt *CI = dyn_cast<ConstantInt>(NextInChain)) {
727 if (CI->isZero()) {
728 // Custom XOR handling for disjoint bits - preserves original XOR
729 // with non-disjoint constant bits.
730 // TODO: The design should be updated to support partial constant
731 // extraction.
732 if (BO->getOpcode() == Instruction::Xor)
733 return BO;
734
735 // If NextInChain is 0 and not the LHS of a sub, we can simplify the
736 // sub-expression to be just TheOther.
737 if (!(BO->getOpcode() == Instruction::Sub && OpNo == 0))
738 return TheOther;
739 }
740 }
741
742 BinaryOperator::BinaryOps NewOp = BO->getOpcode();
743 if (BO->getOpcode() == Instruction::Or) {
744 // Rebuild "or" as "add", because "or" may be invalid for the new
745 // expression.
746 //
747 // For instance, given
748 // a | (b + 5) where a and b + 5 have no common bits,
749 // we can extract 5 as the constant offset.
750 //
751 // However, reusing the "or" in the new index would give us
752 // (a | b) + 5
753 // which does not equal a | (b + 5).
754 //
755 // Replacing the "or" with "add" is fine, because
756 // a | (b + 5) = a + (b + 5) = (a + b) + 5
757 NewOp = Instruction::Add;
758 }
759
760 BinaryOperator *NewBO;
761 if (OpNo == 0) {
762 NewBO = BinaryOperator::Create(NewOp, NextInChain, TheOther, "", IP);
763 } else {
764 NewBO = BinaryOperator::Create(NewOp, TheOther, NextInChain, "", IP);
765 }
766 NewBO->takeName(BO);
767 return NewBO;
768}
769
770/// Analyze XOR instruction to extract disjoint constant bits for address
771/// folding
772///
773/// This function identifies bits in an XOR constant operand that are disjoint
774/// from the base operand's known set bits. For these disjoint bits, XOR behaves
775/// identically to addition, allowing us to extract them as constant offsets
776/// that can be folded into addressing modes.
777///
778/// Transformation: `Base ^ Const` becomes `(Base ^ NonDisjointBits) +
779/// DisjointBits` where DisjointBits = Const & KnownZeros(Base)
780///
781/// Example with ptr having known-zero low bit:
782/// Original: `xor %ptr, 3` ; 3 = 0b11
783/// Analysis: DisjointBits = 3 & KnownZeros(%ptr) = 0b11 & 0b01 = 0b01
784/// Result: `(xor %ptr, 2) + 1` where 1 can be folded into address mode
785///
786/// \param XorInst The XOR binary operator to analyze
787/// \return APInt containing the disjoint bits that can be extracted as offset,
788/// or zero if no disjoint bits exist
789APInt ConstantOffsetExtractor::extractDisjointBitsFromXor(
790 BinaryOperator *XorInst) {
791 assert(XorInst && XorInst->getOpcode() == Instruction::Xor &&
792 "Expected XOR instruction");
793
794 const unsigned BitWidth = XorInst->getType()->getScalarSizeInBits();
795 Value *BaseOperand;
796 ConstantInt *XorConstant;
797
798 // Match pattern: xor BaseOperand, Constant.
799 if (!match(XorInst, m_Xor(m_Value(BaseOperand), m_ConstantInt(XorConstant))))
800 return APInt::getZero(BitWidth);
801
802 // Compute known bits for the base operand.
803 const SimplifyQuery SQ(DL);
804 const KnownBits BaseKnownBits = computeKnownBits(BaseOperand, SQ);
805 const APInt &ConstantValue = XorConstant->getValue();
806
807 // Identify disjoint bits: constant bits that are known zero in base.
808 const APInt DisjointBits = ConstantValue & BaseKnownBits.Zero;
809
810 // Early exit if no disjoint bits found.
811 if (DisjointBits.isZero())
812 return APInt::getZero(BitWidth);
813
814 // Compute the remaining non-disjoint bits that stay in the XOR.
815 const APInt NonDisjointBits = ConstantValue & ~DisjointBits;
816
817 // FIXME: Enhance XOR constant extraction to handle nested binary operations.
818 // Currently we only extract disjoint bits from the immediate XOR constant,
819 // but we could recursively process cases like:
820 // xor (add %base, C1), C2 -> add %base, (C1 ^ disjoint_bits(C2))
821 // This requires careful analysis to ensure the transformation preserves
822 // semantics, particularly around sign extension and overflow behavior.
823
824  // Add the non-disjoint constant to the user chain for later transformation.
825 // This will replace the original constant in the XOR with the new
826 // constant.
827 UserChain.push_back(ConstantInt::get(XorInst->getType(), NonDisjointBits));
828 return DisjointBits;
829}
830
831/// A helper function to check if reassociating through an entry in the user
832/// chain would invalidate the GEP's nuw flag.
833static bool allowsPreservingNUW(const User *U) {
834 if (const BinaryOperator *BO = dyn_cast<BinaryOperator>(U)) {
835 // Binary operations need to be effectively add nuw.
836 auto Opcode = BO->getOpcode();
837 if (Opcode == BinaryOperator::Or) {
838 // Ors are only considered here if they are disjoint. The addition that
839 // they represent in this case is NUW.
840 assert(cast<PossiblyDisjointInst>(BO)->isDisjoint());
841 return true;
842 }
843 return Opcode == BinaryOperator::Add && BO->hasNoUnsignedWrap();
844 }
845 // UserChain can only contain ConstantInt, CastInst, or BinaryOperator.
846 // Among the possible CastInsts, only trunc without nuw is a problem: If it
847 // is distributed through an add nuw, wrapping may occur:
848 // "add nuw trunc(a), trunc(b)" is more poisonous than "trunc(add nuw a, b)"
849 if (const TruncInst *TI = dyn_cast<TruncInst>(U))
850 return TI->hasNoUnsignedWrap();
851 assert((isa<CastInst>(U) || isa<ConstantInt>(U)) && "Unexpected User.");
852 return true;
853}
854
855Value *ConstantOffsetExtractor::Extract(Value *Idx, GetElementPtrInst *GEP,
856 User *&UserChainTail,
857 bool &PreservesNUW) {
858 ConstantOffsetExtractor Extractor(GEP->getIterator());
859 // Find a non-zero constant offset first.
860 APInt ConstantOffset =
861 Extractor.find(Idx, /* SignExtended */ false, /* ZeroExtended */ false,
862 GEP->isInBounds());
863 if (ConstantOffset == 0) {
864 UserChainTail = nullptr;
865 PreservesNUW = true;
866 return nullptr;
867 }
868
869 PreservesNUW = all_of(Extractor.UserChain, allowsPreservingNUW);
870
871 // Separates the constant offset from the GEP index.
872 Value *IdxWithoutConstOffset = Extractor.rebuildWithoutConstOffset();
873 UserChainTail = Extractor.UserChain.back();
874 return IdxWithoutConstOffset;
875}
876
877int64_t ConstantOffsetExtractor::Find(Value *Idx, GetElementPtrInst *GEP) {
878 // If Idx is an index of an inbound GEP, Idx is guaranteed to be non-negative.
879 return ConstantOffsetExtractor(GEP->getIterator())
880 .find(Idx, /* SignExtended */ false, /* ZeroExtended */ false,
881 GEP->isInBounds())
882 .getSExtValue();
883}
884
885bool SeparateConstOffsetFromGEP::canonicalizeArrayIndicesToIndexSize(
886 GetElementPtrInst *GEP) {
887 bool Changed = false;
888 Type *PtrIdxTy = DL->getIndexType(GEP->getType());
889  gep_type_iterator GTI = gep_type_begin(*GEP);
890  for (User::op_iterator I = GEP->op_begin() + 1, E = GEP->op_end();
891 I != E; ++I, ++GTI) {
892 // Skip struct member indices which must be i32.
893 if (GTI.isSequential()) {
894 if ((*I)->getType() != PtrIdxTy) {
895 *I = CastInst::CreateIntegerCast(*I, PtrIdxTy, true, "idxprom",
896 GEP->getIterator());
897 Changed = true;
898 }
899 }
900 }
901 return Changed;
902}
903
904int64_t
905SeparateConstOffsetFromGEP::accumulateByteOffset(GetElementPtrInst *GEP,
906 bool &NeedsExtraction) {
907 NeedsExtraction = false;
908 int64_t AccumulativeByteOffset = 0;
909  gep_type_iterator GTI = gep_type_begin(*GEP);
910  for (unsigned I = 1, E = GEP->getNumOperands(); I != E; ++I, ++GTI) {
911 if (GTI.isSequential()) {
912 // Constant offsets of scalable types are not really constant.
913 if (GTI.getIndexedType()->isScalableTy())
914 continue;
915
916 // Tries to extract a constant offset from this GEP index.
917 int64_t ConstantOffset =
918 ConstantOffsetExtractor::Find(GEP->getOperand(I), GEP);
919 if (ConstantOffset != 0) {
920 NeedsExtraction = true;
921 // A GEP may have multiple indices. We accumulate the extracted
922 // constant offset to a byte offset, and later offset the remainder of
923 // the original GEP with this byte offset.
924 AccumulativeByteOffset +=
925 ConstantOffset * GTI.getSequentialElementStride(*DL);
926 }
927 } else if (LowerGEP) {
928 StructType *StTy = GTI.getStructType();
929 uint64_t Field = cast<ConstantInt>(GEP->getOperand(I))->getZExtValue();
930 // Skip field 0 as the offset is always 0.
931 if (Field != 0) {
932 NeedsExtraction = true;
933 AccumulativeByteOffset +=
934 DL->getStructLayout(StTy)->getElementOffset(Field);
935 }
936 }
937 }
938 return AccumulativeByteOffset;
939}
940
941void SeparateConstOffsetFromGEP::lowerToSingleIndexGEPs(
942 GetElementPtrInst *Variadic, int64_t AccumulativeByteOffset) {
943 IRBuilder<> Builder(Variadic);
944 Type *PtrIndexTy = DL->getIndexType(Variadic->getType());
945
946 Value *ResultPtr = Variadic->getOperand(0);
947 Loop *L = LI->getLoopFor(Variadic->getParent());
948  // The base is a swap candidate only if it is loop invariant and used at most once in the loop.
949 bool isSwapCandidate =
950 L && L->isLoopInvariant(ResultPtr) &&
951 !hasMoreThanOneUseInLoop(ResultPtr, L);
952 Value *FirstResult = nullptr;
953
954 gep_type_iterator GTI = gep_type_begin(*Variadic);
955 // Create an ugly GEP for each sequential index. We don't create GEPs for
956 // structure indices, as they are accumulated in the constant offset index.
957 for (unsigned I = 1, E = Variadic->getNumOperands(); I != E; ++I, ++GTI) {
958 if (GTI.isSequential()) {
959 Value *Idx = Variadic->getOperand(I);
960 // Skip zero indices.
961 if (ConstantInt *CI = dyn_cast<ConstantInt>(Idx))
962 if (CI->isZero())
963 continue;
964
965 APInt ElementSize = APInt(PtrIndexTy->getIntegerBitWidth(),
966                                GTI.getSequentialElementStride(*DL));
967      // Scale the index by element size.
968 if (ElementSize != 1) {
969 if (ElementSize.isPowerOf2()) {
970 Idx = Builder.CreateShl(
971 Idx, ConstantInt::get(PtrIndexTy, ElementSize.logBase2()));
972 } else {
973 Idx =
974 Builder.CreateMul(Idx, ConstantInt::get(PtrIndexTy, ElementSize));
975 }
976 }
977 // Create an ugly GEP with a single index for each index.
978 ResultPtr = Builder.CreatePtrAdd(ResultPtr, Idx, "uglygep");
979 if (FirstResult == nullptr)
980 FirstResult = ResultPtr;
981 }
982 }
983
984 // Create a GEP with the constant offset index.
985 if (AccumulativeByteOffset != 0) {
986 Value *Offset = ConstantInt::get(PtrIndexTy, AccumulativeByteOffset);
987 ResultPtr = Builder.CreatePtrAdd(ResultPtr, Offset, "uglygep");
988 } else
989 isSwapCandidate = false;
990
991 // If we created a GEP with constant index, and the base is loop invariant,
992 // then we swap the first one with it, so LICM can move constant GEP out
993 // later.
994 auto *FirstGEP = dyn_cast_or_null<GetElementPtrInst>(FirstResult);
995 auto *SecondGEP = dyn_cast<GetElementPtrInst>(ResultPtr);
996 if (isSwapCandidate && isLegalToSwapOperand(FirstGEP, SecondGEP, L))
997 swapGEPOperand(FirstGEP, SecondGEP);
998
999 Variadic->replaceAllUsesWith(ResultPtr);
1000 Variadic->eraseFromParent();
1001}
1002
1003bool SeparateConstOffsetFromGEP::reorderGEP(GetElementPtrInst *GEP,
1004 TargetTransformInfo &TTI) {
1005 auto PtrGEP = dyn_cast<GetElementPtrInst>(GEP->getPointerOperand());
1006 if (!PtrGEP)
1007 return false;
1008
1009 bool NestedNeedsExtraction;
1010 int64_t NestedByteOffset =
1011 accumulateByteOffset(PtrGEP, NestedNeedsExtraction);
1012 if (!NestedNeedsExtraction)
1013 return false;
1014
1015 unsigned AddrSpace = PtrGEP->getPointerAddressSpace();
1016 if (!TTI.isLegalAddressingMode(GEP->getResultElementType(),
1017 /*BaseGV=*/nullptr, NestedByteOffset,
1018 /*HasBaseReg=*/true, /*Scale=*/0, AddrSpace))
1019 return false;
1020
1021 bool GEPInBounds = GEP->isInBounds();
1022 bool PtrGEPInBounds = PtrGEP->isInBounds();
1023 bool IsChainInBounds = GEPInBounds && PtrGEPInBounds;
1024 if (IsChainInBounds) {
1025 auto IsKnownNonNegative = [this](Value *V) {
1026 return isKnownNonNegative(V, *DL);
1027 };
1028 IsChainInBounds &= all_of(GEP->indices(), IsKnownNonNegative);
1029 if (IsChainInBounds)
1030 IsChainInBounds &= all_of(PtrGEP->indices(), IsKnownNonNegative);
1031 }
1032
1033 IRBuilder<> Builder(GEP);
1034 // For trivial GEP chains, we can swap the indices.
1035 Value *NewSrc = Builder.CreateGEP(
1036 GEP->getSourceElementType(), PtrGEP->getPointerOperand(),
1037 SmallVector<Value *, 4>(GEP->indices()), "", IsChainInBounds);
1038 Value *NewGEP = Builder.CreateGEP(PtrGEP->getSourceElementType(), NewSrc,
1039 SmallVector<Value *, 4>(PtrGEP->indices()),
1040 "", IsChainInBounds);
1041 GEP->replaceAllUsesWith(NewGEP);
1042  RecursivelyDeleteTriviallyDeadInstructions(GEP);
1043  return true;
1044}
1045
1046bool SeparateConstOffsetFromGEP::splitGEP(GetElementPtrInst *GEP) {
1047 // Skip vector GEPs.
1048 if (GEP->getType()->isVectorTy())
1049 return false;
1050
1051  // If the base of this GEP is a ptradd of a constant, let's pass the constant
1052 // along. This ensures that when we have a chain of GEPs the constant
1053 // offset from each is accumulated.
1054 Value *NewBase;
1055 const APInt *BaseOffset;
1056 const bool ExtractBase =
1057 match(GEP->getPointerOperand(),
1058 m_PtrAdd(m_Value(NewBase), m_APInt(BaseOffset)));
1059
1060 const int64_t BaseByteOffset = ExtractBase ? BaseOffset->getSExtValue() : 0;
1061
1062 // The backend can already nicely handle the case where all indices are
1063 // constant.
1064 if (GEP->hasAllConstantIndices() && !ExtractBase)
1065 return false;
1066
1067 bool Changed = canonicalizeArrayIndicesToIndexSize(GEP);
1068
1069 bool NeedsExtraction;
1070 int64_t AccumulativeByteOffset =
1071 BaseByteOffset + accumulateByteOffset(GEP, NeedsExtraction);
1072
1073 TargetTransformInfo &TTI = GetTTI(*GEP->getFunction());
1074
1075 if (!NeedsExtraction && !ExtractBase) {
1076 Changed |= reorderGEP(GEP, TTI);
1077 return Changed;
1078 }
1079
1080  // If LowerGEP is disabled, before really splitting the GEP, check whether the
1081  // backend supports the addressing mode we are about to produce. If not, this
1082  // splitting probably won't be beneficial.
1083  // If LowerGEP is enabled, even if the extracted constant offset cannot match
1084  // the addressing mode, we can still optimize the other lowered parts of the
1085  // variable indices. Therefore, we don't check for addressing modes in that
1086  // case.
1087 if (!LowerGEP) {
1088 unsigned AddrSpace = GEP->getPointerAddressSpace();
1089 if (!TTI.isLegalAddressingMode(GEP->getResultElementType(),
1090 /*BaseGV=*/nullptr, AccumulativeByteOffset,
1091 /*HasBaseReg=*/true, /*Scale=*/0,
1092 AddrSpace)) {
1093 return Changed;
1094 }
1095 }
1096
1097 // Track information for preserving GEP flags.
1098 bool AllOffsetsNonNegative = AccumulativeByteOffset >= 0;
1099 bool AllNUWPreserved = GEP->hasNoUnsignedWrap();
1100 bool NewGEPInBounds = GEP->isInBounds();
1101 bool NewGEPNUSW = GEP->hasNoUnsignedSignedWrap();
1102
1103 // Remove the constant offset in each sequential index. The resultant GEP
1104 // computes the variadic base.
1105 // Notice that we don't remove struct field indices here. If LowerGEP is
1106 // disabled, a structure index is not accumulated and we still use the old
1107 // one. If LowerGEP is enabled, a structure index is accumulated in the
1108 // constant offset. LowerToSingleIndexGEPs will later handle the constant
1109 // offset and won't need a new structure index.
1110  gep_type_iterator GTI = gep_type_begin(*GEP);
1111  for (unsigned I = 1, E = GEP->getNumOperands(); I != E; ++I, ++GTI) {
1112 if (GTI.isSequential()) {
1113 // Constant offsets of scalable types are not really constant.
1114 if (GTI.getIndexedType()->isScalableTy())
1115 continue;
1116
1117 // Splits this GEP index into a variadic part and a constant offset, and
1118 // uses the variadic part as the new index.
1119 Value *Idx = GEP->getOperand(I);
1120 User *UserChainTail;
1121 bool PreservesNUW;
1122 Value *NewIdx = ConstantOffsetExtractor::Extract(Idx, GEP, UserChainTail,
1123 PreservesNUW);
1124 if (NewIdx != nullptr) {
1125 // Switches to the index with the constant offset removed.
1126 GEP->setOperand(I, NewIdx);
1127 // After switching to the new index, we can garbage-collect UserChain
1128 // and the old index if they are not used.
1129        RecursivelyDeleteTriviallyDeadInstructions(UserChainTail);
1130        RecursivelyDeleteTriviallyDeadInstructions(Idx);
1131        Idx = NewIdx;
1132 AllNUWPreserved &= PreservesNUW;
1133 }
1134 AllOffsetsNonNegative =
1135 AllOffsetsNonNegative && isKnownNonNegative(Idx, *DL);
1136 }
1137 }
1138 if (ExtractBase) {
1139 GEPOperator *Base = cast<GEPOperator>(GEP->getPointerOperand());
1140 AllNUWPreserved &= Base->hasNoUnsignedWrap();
1141 NewGEPInBounds &= Base->isInBounds();
1142 NewGEPNUSW &= Base->hasNoUnsignedSignedWrap();
1143 AllOffsetsNonNegative &= BaseByteOffset >= 0;
1144
1145 GEP->setOperand(0, NewBase);
1146    RecursivelyDeleteTriviallyDeadInstructions(Base);
1147  }
1148
1149 // Clear the inbounds attribute because the new index may be off-bound.
1150 // e.g.,
1151 //
1152 // b = add i64 a, 5
1153 // addr = gep inbounds float, float* p, i64 b
1154 //
1155 // is transformed to:
1156 //
1157 // addr2 = gep float, float* p, i64 a ; inbounds removed
1158 // addr = gep float, float* addr2, i64 5 ; inbounds removed
1159 //
1160 // If a is -4, although the old index b is in bounds, the new index a is
1161 // off-bound. http://llvm.org/docs/LangRef.html#id181 says "if the
1162 // inbounds keyword is not present, the offsets are added to the base
1163 // address with silently-wrapping two's complement arithmetic".
1164  // Therefore, the final code will be semantically equivalent.
1165 GEPNoWrapFlags NewGEPFlags = GEPNoWrapFlags::none();
1166
1167 // If the initial GEP was inbounds/nusw and all variable indices and the
1168 // accumulated offsets are non-negative, they can be added in any order and
1169 // the intermediate results are in bounds and don't overflow in a nusw sense.
1170 // So, we can preserve the inbounds/nusw flag for both GEPs.
1171 bool CanPreserveInBoundsNUSW = AllOffsetsNonNegative;
1172
1173 // If the initial GEP was NUW and all operations that we reassociate were NUW
1174 // additions, the resulting GEPs are also NUW.
1175 if (AllNUWPreserved) {
1176 NewGEPFlags |= GEPNoWrapFlags::noUnsignedWrap();
1177 // If the initial GEP additionally had NUSW (or inbounds, which implies
1178 // NUSW), we know that the indices in the initial GEP must all have their
1179 // signbit not set. For indices that are the result of NUW adds, the
1180 // add-operands therefore also don't have their signbit set. Therefore, all
1181 // indices of the resulting GEPs are non-negative -> we can preserve
1182 // the inbounds/nusw flag.
1183 CanPreserveInBoundsNUSW |= NewGEPNUSW;
1184 }
1185
1186 if (CanPreserveInBoundsNUSW) {
1187 if (NewGEPInBounds)
1188 NewGEPFlags |= GEPNoWrapFlags::inBounds();
1189 else if (NewGEPNUSW)
1190 NewGEPFlags |= GEPNoWrapFlags::noUnsignedSignedWrap();
1191 }
1192
1193 GEP->setNoWrapFlags(NewGEPFlags);
1194
1195 // Lowers a GEP to GEPs with a single index.
1196 if (LowerGEP) {
1197 lowerToSingleIndexGEPs(GEP, AccumulativeByteOffset);
1198 return true;
1199 }
1200
1201 // No need to create another GEP if the accumulative byte offset is 0.
1202 if (AccumulativeByteOffset == 0)
1203 return true;
1204
1205 // Offsets the base with the accumulative byte offset.
1206 //
1207 // %gep ; the base
1208 // ... %gep ...
1209 //
1210 // => add the offset
1211 //
1212 // %gep2 ; clone of %gep
1213 // %new.gep = gep i8, %gep2, %offset
1214 // %gep ; will be removed
1215 // ... %gep ...
1216 //
1217 // => replace all uses of %gep with %new.gep and remove %gep
1218 //
1219 // %gep2 ; clone of %gep
1220 // %new.gep = gep i8, %gep2, %offset
1221 // ... %new.gep ...
1222 Instruction *NewGEP = GEP->clone();
1223 NewGEP->insertBefore(GEP->getIterator());
1224
1225 Type *PtrIdxTy = DL->getIndexType(GEP->getType());
1226 IRBuilder<> Builder(GEP);
1227 NewGEP = cast<Instruction>(Builder.CreatePtrAdd(
1228 NewGEP, ConstantInt::get(PtrIdxTy, AccumulativeByteOffset, true),
1229 GEP->getName(), NewGEPFlags));
1230 NewGEP->copyMetadata(*GEP);
1231
1232 GEP->replaceAllUsesWith(NewGEP);
1233 GEP->eraseFromParent();
1234
1235 return true;
1236}
1237
1238bool SeparateConstOffsetFromGEPLegacyPass::runOnFunction(Function &F) {
1239 if (skipFunction(F))
1240 return false;
1241 auto *DT = &getAnalysis<DominatorTreeWrapperPass>().getDomTree();
1242 auto *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();
1243 auto *TLI = &getAnalysis<TargetLibraryInfoWrapperPass>().getTLI(F);
1244 auto GetTTI = [this](Function &F) -> TargetTransformInfo & {
1245 return this->getAnalysis<TargetTransformInfoWrapperPass>().getTTI(F);
1246 };
1247 SeparateConstOffsetFromGEP Impl(DT, LI, TLI, GetTTI, LowerGEP);
1248 return Impl.run(F);
1249}
1250
1251bool SeparateConstOffsetFromGEP::run(Function &F) {
1252  if (DisableSeparateConstOffsetFromGEP)
1253    return false;
1254
1255 DL = &F.getDataLayout();
1256 bool Changed = false;
1257
1258 ReversePostOrderTraversal<Function *> RPOT(&F);
1259 for (BasicBlock *B : RPOT) {
1260 if (!DT->isReachableFromEntry(B))
1261 continue;
1262
1263 for (Instruction &I : llvm::make_early_inc_range(*B))
1264 if (GetElementPtrInst *GEP = dyn_cast<GetElementPtrInst>(&I))
1265 Changed |= splitGEP(GEP);
1266    // No need to split GEP ConstantExprs because all their indices are
1267    // constant already.
1268 }
1269
1270 Changed |= reuniteExts(F);
1271
1272 if (VerifyNoDeadCode)
1273 verifyNoDeadCode(F);
1274
1275 return Changed;
1276}
1277
1278Instruction *SeparateConstOffsetFromGEP::findClosestMatchingDominator(
1279 ExprKey Key, Instruction *Dominatee,
1280 DenseMap<ExprKey, SmallVector<Instruction *, 2>> &DominatingExprs) {
1281 auto Pos = DominatingExprs.find(Key);
1282 if (Pos == DominatingExprs.end())
1283 return nullptr;
1284
1285 auto &Candidates = Pos->second;
1286 // Because we process the basic blocks in pre-order of the dominator tree, a
1287 // candidate that doesn't dominate the current instruction won't dominate any
1288 // future instruction either. Therefore, we pop it out of the stack. This
1289 // optimization makes the algorithm O(n).
1290 while (!Candidates.empty()) {
1291 Instruction *Candidate = Candidates.back();
1292 if (DT->dominates(Candidate, Dominatee))
1293 return Candidate;
1294 Candidates.pop_back();
1295 }
1296 return nullptr;
1297}
1298
1299bool SeparateConstOffsetFromGEP::reuniteExts(Instruction *I) {
1300 if (!I->getType()->isIntOrIntVectorTy())
1301 return false;
1302
1303 // Dom: LHS+RHS
1304 // I: sext(LHS)+sext(RHS)
1305 // If Dom can't sign overflow and Dom dominates I, optimize I to sext(Dom).
1306 // TODO: handle zext
1307 Value *LHS = nullptr, *RHS = nullptr;
1308 if (match(I, m_Add(m_SExt(m_Value(LHS)), m_SExt(m_Value(RHS))))) {
1309 if (LHS->getType() == RHS->getType()) {
1310 ExprKey Key = createNormalizedCommutablePair(LHS, RHS);
1311 if (auto *Dom = findClosestMatchingDominator(Key, I, DominatingAdds)) {
1312 Instruction *NewSExt =
1313 new SExtInst(Dom, I->getType(), "", I->getIterator());
1314 NewSExt->takeName(I);
1315 I->replaceAllUsesWith(NewSExt);
1316 NewSExt->setDebugLoc(I->getDebugLoc());
1317        RecursivelyDeleteTriviallyDeadInstructions(I);
1318        return true;
1319 }
1320 }
1321 } else if (match(I, m_Sub(m_SExt(m_Value(LHS)), m_SExt(m_Value(RHS))))) {
1322 if (LHS->getType() == RHS->getType()) {
1323 if (auto *Dom =
1324 findClosestMatchingDominator({LHS, RHS}, I, DominatingSubs)) {
1325 Instruction *NewSExt =
1326 new SExtInst(Dom, I->getType(), "", I->getIterator());
1327 NewSExt->takeName(I);
1328 I->replaceAllUsesWith(NewSExt);
1329 NewSExt->setDebugLoc(I->getDebugLoc());
1330        RecursivelyDeleteTriviallyDeadInstructions(I);
1331        return true;
1332 }
1333 }
1334 }
1335
1336 // Add I to DominatingExprs if it's an add/sub that can't sign overflow.
1337 if (match(I, m_NSWAdd(m_Value(LHS), m_Value(RHS)))) {
1338    if (programUndefinedIfPoison(I)) {
1339      ExprKey Key = createNormalizedCommutablePair(LHS, RHS);
1340 DominatingAdds[Key].push_back(I);
1341 }
1342 } else if (match(I, m_NSWSub(m_Value(LHS), m_Value(RHS)))) {
1343    if (programUndefinedIfPoison(I))
1344      DominatingSubs[{LHS, RHS}].push_back(I);
1345 }
1346 return false;
1347}
1348
1349bool SeparateConstOffsetFromGEP::reuniteExts(Function &F) {
1350 bool Changed = false;
1351 DominatingAdds.clear();
1352 DominatingSubs.clear();
1353 for (const auto Node : depth_first(DT)) {
1354 BasicBlock *BB = Node->getBlock();
1355 for (Instruction &I : llvm::make_early_inc_range(*BB))
1356 Changed |= reuniteExts(&I);
1357 }
1358 return Changed;
1359}
1360
1361void SeparateConstOffsetFromGEP::verifyNoDeadCode(Function &F) {
1362 for (BasicBlock &B : F) {
1363 for (Instruction &I : B) {
1364      if (isInstructionTriviallyDead(&I)) {
1365        std::string ErrMessage;
1366 raw_string_ostream RSO(ErrMessage);
1367 RSO << "Dead instruction detected!\n" << I << "\n";
1368 llvm_unreachable(RSO.str().c_str());
1369 }
1370 }
1371 }
1372}
1373
1374bool SeparateConstOffsetFromGEP::isLegalToSwapOperand(
1375 GetElementPtrInst *FirstGEP, GetElementPtrInst *SecondGEP, Loop *CurLoop) {
1376 if (!FirstGEP || !FirstGEP->hasOneUse())
1377 return false;
1378
1379 if (!SecondGEP || FirstGEP->getParent() != SecondGEP->getParent())
1380 return false;
1381
1382 if (FirstGEP == SecondGEP)
1383 return false;
1384
1385 unsigned FirstNum = FirstGEP->getNumOperands();
1386 unsigned SecondNum = SecondGEP->getNumOperands();
1387  // Give up if the number of operands is not 2.
1388 if (FirstNum != SecondNum || FirstNum != 2)
1389 return false;
1390
1391 Value *FirstBase = FirstGEP->getOperand(0);
1392 Value *SecondBase = SecondGEP->getOperand(0);
1393 Value *FirstOffset = FirstGEP->getOperand(1);
1394 // Give up if the index of the first GEP is loop invariant.
1395 if (CurLoop->isLoopInvariant(FirstOffset))
1396 return false;
1397
1398  // Give up if the bases don't have the same type.
1399 if (FirstBase->getType() != SecondBase->getType())
1400 return false;
1401
1402 Instruction *FirstOffsetDef = dyn_cast<Instruction>(FirstOffset);
1403
1404  // Check if the second operand of the first GEP has a constant coefficient.
1405  // For example, for the following code, we won't gain anything by
1406 // hoisting the second GEP out because the second GEP can be folded away.
1407 // %scevgep.sum.ur159 = add i64 %idxprom48.ur, 256
1408 // %67 = shl i64 %scevgep.sum.ur159, 2
1409 // %uglygep160 = getelementptr i8* %65, i64 %67
1410 // %uglygep161 = getelementptr i8* %uglygep160, i64 -1024
1411
1412  // Skip a constant shift instruction which may be generated by splitting GEPs.
1413 if (FirstOffsetDef && FirstOffsetDef->isShift() &&
1414 isa<ConstantInt>(FirstOffsetDef->getOperand(1)))
1415 FirstOffsetDef = dyn_cast<Instruction>(FirstOffsetDef->getOperand(0));
1416
1417  // Give up if FirstOffsetDef is an Add or Sub with a constant, because it
1418  // may not be profitable at all due to constant folding.
1419 if (FirstOffsetDef)
1420 if (BinaryOperator *BO = dyn_cast<BinaryOperator>(FirstOffsetDef)) {
1421 unsigned opc = BO->getOpcode();
1422 if ((opc == Instruction::Add || opc == Instruction::Sub) &&
1423 (isa<ConstantInt>(BO->getOperand(0)) ||
1424         isa<ConstantInt>(BO->getOperand(1))))
1425        return false;
1426 }
1427 return true;
1428}
1429
1430bool SeparateConstOffsetFromGEP::hasMoreThanOneUseInLoop(Value *V, Loop *L) {
1431 // TODO: Could look at uses of globals, but we need to make sure we are
1432 // looking at the correct function.
1433 if (isa<Constant>(V))
1434 return false;
1435
1436 int UsesInLoop = 0;
1437 for (User *U : V->users()) {
1438 if (Instruction *User = dyn_cast<Instruction>(U))
1439 if (L->contains(User))
1440 if (++UsesInLoop > 1)
1441 return true;
1442 }
1443 return false;
1444}
1445
1446void SeparateConstOffsetFromGEP::swapGEPOperand(GetElementPtrInst *First,
1447 GetElementPtrInst *Second) {
1448 Value *Offset1 = First->getOperand(1);
1449 Value *Offset2 = Second->getOperand(1);
1450 First->setOperand(1, Offset2);
1451 Second->setOperand(1, Offset1);
1452
1453  // We changed p+o+c to p+c+o; p+c may not be inbounds anymore.
1454 const DataLayout &DAL = First->getDataLayout();
1455 APInt Offset(DAL.getIndexSizeInBits(
1456 cast<PointerType>(First->getType())->getAddressSpace()),
1457 0);
1458 Value *NewBase =
1459 First->stripAndAccumulateInBoundsConstantOffsets(DAL, Offset);
1460 uint64_t ObjectSize;
1461 if (!getObjectSize(NewBase, ObjectSize, DAL, TLI) ||
1462 Offset.ugt(ObjectSize)) {
1463 // TODO(gep_nowrap): Make flag preservation more precise.
1464 First->setNoWrapFlags(GEPNoWrapFlags::none());
1465 Second->setNoWrapFlags(GEPNoWrapFlags::none());
1466 } else
1467 First->setIsInBounds(true);
1468}
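// Worked example (editorial, assuming a hypothetical 60-byte object): after
// the swap, the first GEP computes p + 64. Even if the final address
// p + 64 + o lands inside the object for a negative o, the intermediate
// p + 64 is past the end, so keeping inbounds would be wrong. That is why the
// getObjectSize/Offset check above only re-establishes inbounds when the
// accumulated constant offset is known not to exceed the object size, and
// otherwise clears all GEP nowrap flags.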
1469
1470 void SeparateConstOffsetFromGEPPass::printPipeline(
1471 raw_ostream &OS, function_ref<StringRef(StringRef)> MapClassName2PassName) {
1472 static_cast<PassInfoMixin<SeparateConstOffsetFromGEPPass> *>(this)
1473 ->printPipeline(OS, MapClassName2PassName);
1474 OS << '<';
1475 if (LowerGEP)
1476 OS << "lower-gep";
1477 OS << '>';
1478}
1479
1480 PreservedAnalyses
1481 SeparateConstOffsetFromGEPPass::run(Function &F, FunctionAnalysisManager &AM) {
1482 auto *DT = &AM.getResult<DominatorTreeAnalysis>(F);
1483 auto *LI = &AM.getResult<LoopAnalysis>(F);
1484 auto *TLI = &AM.getResult<TargetLibraryAnalysis>(F);
1485 auto GetTTI = [&AM](Function &F) -> TargetTransformInfo & {
1486 return AM.getResult<TargetIRAnalysis>(F);
1487 };
1488 SeparateConstOffsetFromGEP Impl(DT, LI, TLI, GetTTI, LowerGEP);
1489 if (!Impl.run(F))
1490 return PreservedAnalyses::all();
1491 PreservedAnalyses PA;
1492 PA.preserveSet<CFGAnalyses>();
1493 return PA;
1494}
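A minimal sketch (editorial; not part of SeparateConstOffsetFromGEP.cpp) of how the pass can be scheduled from C++ with the new pass manager. The wrapper function name is hypothetical; SeparateConstOffsetFromGEPPass and its LowerGEP parameter come from the pass header.

#include "llvm/IR/PassManager.h"
#include "llvm/Transforms/Scalar/SeparateConstOffsetFromGEP.h"

// Add the pass to a function pass pipeline. LowerGEP = true additionally
// lowers multi-index GEPs to single-index GEPs, as described in the file
// header comment.
static void addSeparateConstOffset(llvm::FunctionPassManager &FPM) {
  FPM.addPass(llvm::SeparateConstOffsetFromGEPPass(/*LowerGEP=*/true));
}

From the opt tool the same configuration is spelled in pipeline text as
'separate-const-offset-from-gep<lower-gep>', mirroring the printPipeline
output above; the exact registration lives in PassRegistry.def.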