[mlir][vector] Standardize base Naming Across Vector Ops (NFC) #137859

Open · banach-space wants to merge 2 commits into main

Conversation

banach-space
Contributor

This change standardizes the argument name used for the tensor or
memref that Vector ops read from or write to. Specifically, it ensures
that all such ops use the name base (i.e., the base address or location
to which offsets are applied).

Updated operations:

  • vector.transfer_read
  • vector.transfer_write

For reference, these ops already use base:

  • vector.load, vector.store, vector.scatter, vector.gather,
    vector.expandload, vector.compressstore, vector.maskedstore,
    vector.maskedload

This is a non-functional change (NFC) and does not alter the semantics
of these operations.

Implements #131602
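
For downstream C++ code, the visible effect is the renamed accessors
that TableGen generates from the new argument name. A minimal
before/after sketch (the helper below is hypothetical, not part of this
patch; it only assumes the getBase() accessor generated from the
renamed $base argument):

  #include "mlir/Dialect/Vector/IR/VectorOps.h"

  using namespace mlir;

  // Sketch: fetch the memref/tensor operand of a transfer_read.
  // The textual IR is unchanged (NFC); only the generated C++
  // accessor names change.
  static Value getXferBase(vector::TransferReadOp readOp) {
    // Before this patch: return readOp.getSource();
    return readOp.getBase(); // the base that the indices offset into
  }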

@llvmbot
Member

llvmbot commented Apr 29, 2025

@llvm/pr-subscribers-mlir-sme
@llvm/pr-subscribers-mlir
@llvm/pr-subscribers-mlir-vector

@llvm/pr-subscribers-backend-amdgpu

Author: Andrzej Warzyński (banach-space)

Changes

This change standardizes the argument name used for the tensor or
memref that Vector ops read from or write to. Specifically, it ensures
that all such ops use the name base (i.e., the base address or location
to which offsets are applied).

Updated operations:

  • vector.transfer_read
  • vector.transfer_write

For reference, these ops already use base:

  • vector.load, vector.store, vector.scatter, vector.gather,
    vector.expandload, vector.compressstore, vector.maskedstore,
    vector.maskedload

This is a non-functional change (NFC) and does not alter the semantics
of these operations.

Implements #131602


Patch is 64.10 KiB, truncated to 20.00 KiB below, full version: https://github.com/llvm/llvm-project/pull/137859.diff

29 Files Affected:

  • (modified) mlir/include/mlir/Dialect/Vector/IR/VectorOps.td (+7-7)
  • (modified) mlir/include/mlir/Interfaces/VectorInterfaces.td (+2-3)
  • (modified) mlir/lib/Conversion/VectorToArmSME/VectorToArmSME.cpp (+6-6)
  • (modified) mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp (+8-8)
  • (modified) mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp (+10-10)
  • (modified) mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp (+2-2)
  • (modified) mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp (+2-2)
  • (modified) mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp (+5-5)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Hoisting.cpp (+5-5)
  • (modified) mlir/lib/Dialect/Linalg/Transforms/Vectorization.cpp (+3-3)
  • (modified) mlir/lib/Dialect/MemRef/Transforms/ExtractAddressComputations.cpp (+1-1)
  • (modified) mlir/lib/Dialect/MemRef/Transforms/FoldMemRefAliasOps.cpp (+2-2)
  • (modified) mlir/lib/Dialect/NVGPU/TransformOps/NVGPUTransformOps.cpp (+2-2)
  • (modified) mlir/lib/Dialect/NVGPU/Transforms/Utils.cpp (+2-2)
  • (modified) mlir/lib/Dialect/NVGPU/Utils/MMAUtils.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Tensor/Transforms/FoldTensorSubsetOps.cpp (+1-1)
  • (modified) mlir/lib/Dialect/Vector/IR/VectorOps.cpp (+16-16)
  • (modified) mlir/lib/Dialect/Vector/Transforms/BufferizableOpInterfaceImpl.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Transforms/LowerVectorMask.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Transforms/LowerVectorTransfer.cpp (+8-8)
  • (modified) mlir/lib/Dialect/Vector/Transforms/SubsetOpInterfaceImpl.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorDistribute.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorDropLeadUnitDim.cpp (+5-5)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorEmulateNarrowType.cpp (+3-3)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorTransferOpTransforms.cpp (+19-19)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorTransferSplitRewritePatterns.cpp (+10-10)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorTransforms.cpp (+7-7)
  • (modified) mlir/lib/Dialect/Vector/Transforms/VectorUnroll.cpp (+2-2)
  • (modified) mlir/lib/Dialect/Vector/Utils/VectorUtils.cpp (+1-1)
diff --git a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
index d7518943229ea..137b18d4bfa75 100644
--- a/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
+++ b/mlir/include/mlir/Dialect/Vector/IR/VectorOps.td
@@ -1273,7 +1273,7 @@ def Vector_TransferReadOp :
       AttrSizedOperandSegments,
       DestinationStyleOpInterface
     ]>,
-    Arguments<(ins AnyShaped:$source,
+    Arguments<(ins AnyShaped:$base,
                    Variadic<Index>:$indices,
                    AffineMapAttr:$permutation_map,
                    AnyType:$padding,
@@ -1470,26 +1470,26 @@ def Vector_TransferReadOp :
   let builders = [
     /// 1. Builder that sets padding to zero and an empty mask (variant with attrs).
     OpBuilder<(ins "VectorType":$vectorType,
-                   "Value":$source,
+                   "Value":$base,
                    "ValueRange":$indices,
                    "AffineMapAttr":$permutationMapAttr,
                    "ArrayAttr":$inBoundsAttr)>,
     /// 2. Builder that sets padding to zero and an empty mask (variant without attrs).
     OpBuilder<(ins "VectorType":$vectorType,
-                   "Value":$source,
+                   "Value":$base,
                    "ValueRange":$indices,
                    "AffineMap":$permutationMap,
                    CArg<"std::optional<ArrayRef<bool>>", "::std::nullopt">:$inBounds)>,
     /// 3. Builder that sets permutation map to 'getMinorIdentityMap'.
     OpBuilder<(ins "VectorType":$vectorType,
-                   "Value":$source,
+                   "Value":$base,
                    "ValueRange":$indices,
                    "Value":$padding,
                    CArg<"std::optional<ArrayRef<bool>>", "::std::nullopt">:$inBounds)>,
     /// 4. Builder that sets padding to zero and permutation map to
     /// 'getMinorIdentityMap'.
     OpBuilder<(ins "VectorType":$vectorType,
-                   "Value":$source,
+                   "Value":$base,
                    "ValueRange":$indices,
                    CArg<"std::optional<ArrayRef<bool>>", "::std::nullopt">:$inBounds)>,
   ];
@@ -1522,7 +1522,7 @@ def Vector_TransferWriteOp :
       DestinationStyleOpInterface
   ]>,
     Arguments<(ins AnyVectorOfAnyRank:$valueToStore,
-                   AnyShaped:$source,
+                   AnyShaped:$base,
                    Variadic<Index>:$indices,
                    AffineMapAttr:$permutation_map,
                    Optional<VectorOfNonZeroRankOf<[I1]>>:$mask,
@@ -1663,7 +1663,7 @@ def Vector_TransferWriteOp :
     ///  ops of other dialects.
     Value getValue() { return getVector(); }
 
-    MutableOperandRange getDpsInitsMutable() { return getSourceMutable(); }
+    MutableOperandRange getDpsInitsMutable() { return getBaseMutable(); }
   }];
 
   let hasFolder = 1;
diff --git a/mlir/include/mlir/Interfaces/VectorInterfaces.td b/mlir/include/mlir/Interfaces/VectorInterfaces.td
index 8ea9d925b3790..7092359127852 100644
--- a/mlir/include/mlir/Interfaces/VectorInterfaces.td
+++ b/mlir/include/mlir/Interfaces/VectorInterfaces.td
@@ -108,10 +108,9 @@ def VectorTransferOpInterface : OpInterface<"VectorTransferOpInterface"> {
         on. In case of a "read" operation, that's the source from which the
         operation reads. In case of a "write" operation, that's the destination
         into which the operation writes.
-        TODO: Change name of operand, which is not accurate for xfer_write.
       }],
       /*retTy=*/"::mlir::Value",
-      /*methodName=*/"getSource",
+      /*methodName=*/"getBase",
       /*args=*/(ins)
     >,
     InterfaceMethod<
@@ -203,7 +202,7 @@ def VectorTransferOpInterface : OpInterface<"VectorTransferOpInterface"> {
 
     /// Return the shaped type of the "source" operand value.
     ::mlir::ShapedType getShapedType() {
-      return ::llvm::cast<::mlir::ShapedType>($_op.getSource().getType());
+      return ::llvm::cast<::mlir::ShapedType>($_op.getBase().getType());
     }
 
     /// Return the number of dimensions that participate in the permutation map.
diff --git a/mlir/lib/Conversion/VectorToArmSME/VectorToArmSME.cpp b/mlir/lib/Conversion/VectorToArmSME/VectorToArmSME.cpp
index 58b85bc0ea6ac..d6f9495b2567c 100644
--- a/mlir/lib/Conversion/VectorToArmSME/VectorToArmSME.cpp
+++ b/mlir/lib/Conversion/VectorToArmSME/VectorToArmSME.cpp
@@ -58,7 +58,7 @@ struct TransferReadToArmSMELowering
       return rewriter.notifyMatchFailure(transferReadOp,
                                          "not a valid vector type for SME");
 
-    if (!llvm::isa<MemRefType>(transferReadOp.getSource().getType()))
+    if (!llvm::isa<MemRefType>(transferReadOp.getBase().getType()))
       return rewriter.notifyMatchFailure(transferReadOp, "not a memref source");
 
     // Out-of-bounds dims are not supported.
@@ -84,7 +84,7 @@ struct TransferReadToArmSMELowering
     auto mask = transferReadOp.getMask();
     auto padding = mask ? transferReadOp.getPadding() : nullptr;
     rewriter.replaceOpWithNewOp<arm_sme::TileLoadOp>(
-        transferReadOp, vectorType, transferReadOp.getSource(),
+        transferReadOp, vectorType, transferReadOp.getBase(),
         transferReadOp.getIndices(), padding, mask, layout);
 
     return success();
@@ -128,7 +128,7 @@ struct TransferWriteToArmSMELowering
     if (!arm_sme::isValidSMETileVectorType(vType))
       return failure();
 
-    if (!llvm::isa<MemRefType>(writeOp.getSource().getType()))
+    if (!llvm::isa<MemRefType>(writeOp.getBase().getType()))
       return failure();
 
     // Out-of-bounds dims are not supported.
@@ -149,7 +149,7 @@ struct TransferWriteToArmSMELowering
                    : arm_sme::TileSliceLayout::Horizontal;
 
     rewriter.replaceOpWithNewOp<arm_sme::TileStoreOp>(
-        writeOp, writeOp.getVector(), writeOp.getSource(), writeOp.getIndices(),
+        writeOp, writeOp.getVector(), writeOp.getBase(), writeOp.getIndices(),
         writeOp.getMask(), layout);
     return success();
   }
@@ -686,7 +686,7 @@ struct FoldTransferWriteOfExtractTileSlice
 
   LogicalResult matchAndRewrite(vector::TransferWriteOp writeOp,
                                 PatternRewriter &rewriter) const final {
-    if (!isa<MemRefType>(writeOp.getSource().getType()))
+    if (!isa<MemRefType>(writeOp.getBase().getType()))
       return rewriter.notifyMatchFailure(writeOp, "destination not a memref");
 
     if (writeOp.hasOutOfBoundsDim())
@@ -713,7 +713,7 @@ struct FoldTransferWriteOfExtractTileSlice
 
     rewriter.replaceOpWithNewOp<arm_sme::StoreTileSliceOp>(
         writeOp, extractTileSlice.getTile(),
-        extractTileSlice.getTileSliceIndex(), mask, writeOp.getSource(),
+        extractTileSlice.getTileSliceIndex(), mask, writeOp.getBase(),
         writeOp.getIndices(), extractTileSlice.getLayout());
     return success();
   }
diff --git a/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp b/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
index ba05a5a000cb9..0b9ebdc0d66bb 100644
--- a/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
+++ b/mlir/lib/Conversion/VectorToGPU/VectorToGPU.cpp
@@ -486,7 +486,7 @@ struct CombineTransferReadOpTranspose final
     Value result =
         rewriter
             .create<vector::TransferReadOp>(
-                loc, resultType, transferReadOp.getSource(),
+                loc, resultType, transferReadOp.getBase(),
                 transferReadOp.getIndices(), AffineMapAttr::get(newMap),
                 transferReadOp.getPadding(), transferReadOp.getMask(),
                 transferReadOp.getInBoundsAttr())
@@ -581,7 +581,7 @@ convertTransferReadOp(RewriterBase &rewriter, vector::TransferReadOp op,
   gpu::MMAMatrixType type =
       gpu::MMAMatrixType::get(op.getVectorType().getShape(), elType, fragType);
   Value load = rewriter.create<gpu::SubgroupMmaLoadMatrixOp>(
-      op.getLoc(), type, op.getSource(), op.getIndices(),
+      op.getLoc(), type, op.getBase(), op.getIndices(),
       rewriter.getIndexAttr(*stride),
       isTranspose ? rewriter.getUnitAttr() : UnitAttr());
   valueMapping[mappingResult] = load;
@@ -612,7 +612,7 @@ convertTransferWriteOp(RewriterBase &rewriter, vector::TransferWriteOp op,
 
   Value matrix = it->second;
   auto store = rewriter.create<gpu::SubgroupMmaStoreMatrixOp>(
-      op.getLoc(), matrix, op.getSource(), op.getIndices(),
+      op.getLoc(), matrix, op.getBase(), op.getIndices(),
       rewriter.getIndexAttr(*stride), /*transpose=*/UnitAttr());
   (void)store;
 
@@ -759,7 +759,7 @@ creatLdMatrixCompatibleLoads(RewriterBase &rewriter, vector::TransferReadOp op,
                                          indices);
 
   nvgpu::LdMatrixOp newOp = rewriter.create<nvgpu::LdMatrixOp>(
-      loc, vectorType, op.getSource(), indices, *transpose, params->numTiles);
+      loc, vectorType, op.getBase(), indices, *transpose, params->numTiles);
   valueMapping[op] = newOp->getResult(0);
   return success();
 }
@@ -819,7 +819,7 @@ createNonLdMatrixLoads(RewriterBase &rewriter, vector::TransferReadOp op,
           rewriter, op, *coords, {laneId, logicalValueId}, newIndices);
 
       Value el = rewriter.create<vector::LoadOp>(loc, loadedElType,
-                                                 op.getSource(), newIndices);
+                                                 op.getBase(), newIndices);
       result = rewriter.create<vector::InsertOp>(loc, el, result, i);
     }
   } else {
@@ -842,7 +842,7 @@ createNonLdMatrixLoads(RewriterBase &rewriter, vector::TransferReadOp op,
         getXferIndices<vector::TransferReadOp>(
             rewriter, op, *coords, {laneId, logicalValueId}, newIndices);
         Value el = rewriter.create<memref::LoadOp>(op.getLoc(), loadedElType,
-                                                   op.getSource(), newIndices);
+                                                   op.getBase(), newIndices);
         result = rewriter.create<vector::InsertOp>(
             op.getLoc(), el, result, ArrayRef<int64_t>{i, innerIdx});
       }
@@ -876,7 +876,7 @@ convertTransferReadToLoads(RewriterBase &rewriter, vector::TransferReadOp op,
     return rewriter.notifyMatchFailure(op, "no warpMatrixInfo");
 
   bool isLdMatrixCompatible =
-      isSharedMemory(cast<MemRefType>(op.getSource().getType())) &&
+      isSharedMemory(cast<MemRefType>(op.getBase().getType())) &&
       nvgpu::inferTileWidthInBits(*warpMatrixInfo) == 128;
 
   VectorType vecTy = op.getVectorType();
@@ -934,7 +934,7 @@ convertTransferWriteToStores(RewriterBase &rewriter, vector::TransferWriteOp op,
     SmallVector<Value, 4> newIndices;
     getXferIndices<vector::TransferWriteOp>(
         rewriter, op, *coords, {laneId, logicalValueId}, newIndices);
-    rewriter.create<vector::StoreOp>(loc, el, op.getSource(), newIndices);
+    rewriter.create<vector::StoreOp>(loc, el, op.getBase(), newIndices);
   }
 
   LLVM_DEBUG(DBGS() << "erase: " << op << "\n");
diff --git a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
index b9b598c02b4a2..b55fe306d9829 100644
--- a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
+++ b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
@@ -199,7 +199,7 @@ static Value generateInBoundsCheck(
   ImplicitLocOpBuilder lb(xferOp.getLoc(), b);
   if (!xferOp.isDimInBounds(0) && !isBroadcast) {
     Value memrefDim =
-        vector::createOrFoldDimOp(b, loc, xferOp.getSource(), *dim);
+        vector::createOrFoldDimOp(b, loc, xferOp.getBase(), *dim);
     AffineExpr d0, d1;
     bindDims(xferOp.getContext(), d0, d1);
     Value base = xferOp.getIndices()[*dim];
@@ -426,7 +426,7 @@ struct Strategy<TransferReadOp> {
     auto vecType = dyn_cast<VectorType>(bufferType.getElementType());
     auto inBoundsAttr = dropFirstElem(b, xferOp.getInBoundsAttr());
     auto newXferOp = b.create<vector::TransferReadOp>(
-        loc, vecType, xferOp.getSource(), xferIndices,
+        loc, vecType, xferOp.getBase(), xferIndices,
         AffineMapAttr::get(unpackedPermutationMap(b, xferOp)),
         xferOp.getPadding(), Value(), inBoundsAttr);
 
@@ -512,7 +512,7 @@ struct Strategy<TransferWriteOp> {
     Location loc = xferOp.getLoc();
     auto vec = b.create<memref::LoadOp>(loc, buffer, loadIndices);
     auto inBoundsAttr = dropFirstElem(b, xferOp.getInBoundsAttr());
-    auto source = loopState.empty() ? xferOp.getSource() : loopState[0];
+    auto source = loopState.empty() ? xferOp.getBase() : loopState[0];
     Type type = isTensorOp(xferOp) ? xferOp.getShapedType() : Type();
     auto newXferOp = b.create<vector::TransferWriteOp>(
         loc, type, vec, source, xferIndices,
@@ -544,7 +544,7 @@ struct Strategy<TransferWriteOp> {
 
   /// Return the initial loop state for the generated scf.for loop.
   static Value initialLoopState(TransferWriteOp xferOp) {
-    return isTensorOp(xferOp) ? xferOp.getSource() : Value();
+    return isTensorOp(xferOp) ? xferOp.getBase() : Value();
   }
 };
 
@@ -1145,7 +1145,7 @@ struct ScalableTransposeTransferWriteConversion
           ArrayRef<OpFoldResult>(*maskDims).drop_front());
     }
 
-    Value initDest = isTensorOp(writeOp) ? writeOp.getSource() : Value{};
+    Value initDest = isTensorOp(writeOp) ? writeOp.getBase() : Value{};
     ValueRange initLoopArgs = initDest ? initDest : ValueRange{};
     auto result = rewriter.create<scf::ForOp>(
         loc, lb, ub, step, initLoopArgs,
@@ -1165,7 +1165,7 @@ struct ScalableTransposeTransferWriteConversion
 
           // Create the transfer_write for the slice.
           Value dest =
-              loopIterArgs.empty() ? writeOp.getSource() : loopIterArgs.front();
+              loopIterArgs.empty() ? writeOp.getBase() : loopIterArgs.front();
           auto newWriteOp = b.create<vector::TransferWriteOp>(
               loc, sliceVec, dest, xferIndices,
               ArrayRef<bool>(writeOp.getInBoundsValues()).drop_front());
@@ -1340,7 +1340,7 @@ struct UnrollTransferReadConversion
 
             auto inBoundsAttr = dropFirstElem(b, xferOp.getInBoundsAttr());
             auto newXferOp = b.create<vector::TransferReadOp>(
-                loc, newXferVecType, xferOp.getSource(), xferIndices,
+                loc, newXferVecType, xferOp.getBase(), xferIndices,
                 AffineMapAttr::get(unpackedPermutationMap(b, xferOp)),
                 xferOp.getPadding(), Value(), inBoundsAttr);
             maybeAssignMask(b, xferOp, newXferOp, i);
@@ -1449,7 +1449,7 @@ struct UnrollTransferWriteConversion
     }
 
     int64_t dimSize = inputVectorTy.getShape()[0];
-    Value source = xferOp.getSource(); // memref or tensor to be written to.
+    Value source = xferOp.getBase(); // memref or tensor to be written to.
     auto sourceType = isTensorOp(xferOp) ? xferOp.getShapedType() : Type();
 
     // Generate fully unrolled loop of transfer ops.
@@ -1568,7 +1568,7 @@ struct Strategy1d<TransferReadOp> {
         /*inBoundsCase=*/
         [&](OpBuilder &b, Location loc) {
           Value val =
-              b.create<memref::LoadOp>(loc, xferOp.getSource(), indices);
+              b.create<memref::LoadOp>(loc, xferOp.getBase(), indices);
           return b.create<vector::InsertElementOp>(loc, val, vec, iv);
         },
         /*outOfBoundsCase=*/
@@ -1599,7 +1599,7 @@ struct Strategy1d<TransferWriteOp> {
         /*inBoundsCase=*/[&](OpBuilder &b, Location loc) {
           auto val =
               b.create<vector::ExtractElementOp>(loc, xferOp.getVector(), iv);
-          b.create<memref::StoreOp>(loc, val, xferOp.getSource(), indices);
+          b.create<memref::StoreOp>(loc, val, xferOp.getBase(), indices);
         });
     b.create<scf::YieldOp>(loc);
   }
diff --git a/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp b/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
index 0bc0f2fca2c3b..30145ea322e79 100644
--- a/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
+++ b/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
@@ -192,7 +192,7 @@ struct TransferReadLowering : public OpRewritePattern<vector::TransferReadOp> {
 
     xegpu::CreateNdDescOp ndDesc =
         createNdDescriptor(rewriter, loc, descType,
-                           dyn_cast<TypedValue<MemRefType>>(readOp.getSource()),
+                           dyn_cast<TypedValue<MemRefType>>(readOp.getBase()),
                            readOp.getIndices());
 
     DenseI64ArrayAttr transposeAttr =
@@ -233,7 +233,7 @@ struct TransferWriteLowering
         xegpu::MemorySpace::Global);
     xegpu::CreateNdDescOp ndDesc = createNdDescriptor(
         rewriter, loc, descType,
-        dyn_cast<TypedValue<MemRefType>>(writeOp.getSource()),
+        dyn_cast<TypedValue<MemRefType>>(writeOp.getBase()),
         writeOp.getIndices());
 
     // By default, no specific caching policy is assigned.
diff --git a/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp b/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
index 9f64abb5a8860..cd41765dec2a2 100644
--- a/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
+++ b/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
@@ -118,7 +118,7 @@ static Value createVectorLoadForMaskedLoad(OpBuilder &builder, Location loc,
   Value fill = builder.create<vector::SplatOp>(loc, unbroadcastedVectorType,
                                                readOp.getPadding());
   Value load = builder.create<vector::LoadOp>(
-      loc, unbroadcastedVectorType, readOp.getSource(), readOp.getIndices());
+      loc, unbroadcastedVectorType, readOp.getBase(), readOp.getIndices());
   Value res = builder.create<arith::SelectOp>(loc, unbroadcastedVectorType,
                                               readOp.getMask(), load, fill);
   // Insert a broadcasting op if required.
@@ -149,7 +149,7 @@ struct TransferReadLowering final : OpRewritePattern<vector::TransferReadOp> {
     }
 
     Location loc = readOp.getLoc();
-    Value src = readOp.getSource();
+    Value src = readOp.getBase();
 
     VectorType vectorType = readOp.getVectorType();
     int64_t vectorSize = vectorType.getNumElements();
diff --git a/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp b/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp
index 62a148d2b7e62..95965872f4098 100644
--- a/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp
+++ b/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp
@@ -315,7 +315,7 @@ struct LegalizeTransferReadOpsByDecomposition
          decomposeToSMETiles(rewriter, vectorType, smeTileType, transposed)) {
       auto smeMask = extractSMEMask(rewriter, loc, mask, smeTile);
       auto smeRead = rewriter.create<vector::TransferReadOp>(
-          loc, smeTileType, readOp.getSource(),
+          loc, smeTileType, readOp.getBase(),
           getSMESubTileIndices(rewriter, loc, readOp.getIndices(), smeTile),
           readOp.getPermutationMapAttr(), readOp.getPadding(), smeMask,
           readOp.getInBoundsAttr());
@@ -359,7 +359,7 @@ struct LegalizeTransferWriteOpsByDecomposition
     auto smeTileType = getSMETileTypeForElement(vectorType.getElementType());
     auto inputSMETiles = adaptor.getValueToStore();
 
-    Value destTensorOrMemref = writeOp.getSource();
+    Value destTensorOrMemref = writeOp.getBase();
     for (auto [index, smeTile] : llvm::enumerate(decomposeToSMETiles(
              rewriter, vectorType, smeTileType, transposed))) {
       auto smeMask = extractSMEMask(rewriter, loc, mask, smeTile);
@@ -497,7 +497,7 @@ struct LegalizeMultiTileTransferWriteAsStoreLoop
       auto slice =
           rewriter.create<vector::ExtractOp>(loc, tile, tileSliceIndex);
       rewriter.create<vector::TransferWriteOp>(
-          loc, slice, writeOp.getSource(), ValueRange{storeRow, storeCol},
+          loc, slice, writeOp.getBase(), ValueRange{storeRow, storeCol},
           AffineMapAttr::get(writeOp.getPermutationMap().dropResult(0)),
           sliceMask,
           rewriter.getBoolArrayAttr(
@@ -677,7 +677,7 @@ struct LiftIllegalVectorTransposeToMemory
         });
     SmallVector<Value> strides(readType.getRank(), Value(one));
     auto readSubview = rewriter.create<memref::SubViewOp>(
-        loc, illegalRead.getSource(), illegalRead.getIndices(), readSizes,
+        loc, illegalRead.getBase(), illegalRead.getIndices(), readSizes,
         strides);
 
     // Apply the transpose to all values/attributes of the transfer_read:
@@ -851,7 +851,7 @@ struct LowerIllegalTransposeStoreViaZA
 
     // Note: We need to use `get_tile` as ther...
[truncated]
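
The interface method getSource() is renamed to getBase() as well (see
the VectorInterfaces.td hunk above), so code that is generic over both
transfer ops follows suit. A sketch mirroring the getShapedType()
helper from that hunk (the free function and its name are illustrative,
not from the patch):

  #include "mlir/Interfaces/VectorInterfaces.h"

  using namespace mlir;

  // Sketch: generic code over transfer_read/transfer_write queries the
  // shaped (memref or tensor) operand via the renamed interface method.
  static ShapedType getXferShapedType(VectorTransferOpInterface xferOp) {
    // Before this patch: xferOp.getSource().getType()
    return cast<ShapedType>(xferOp.getBase().getType());
  }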

@llvmbot
Member

llvmbot commented Apr 29, 2025

@llvm/pr-subscribers-mlir-linalg

@llvmbot
Member

llvmbot commented Apr 29, 2025

@llvm/pr-subscribers-mlir-gpu
 
@@ -759,7 +759,7 @@ creatLdMatrixCompatibleLoads(RewriterBase &rewriter, vector::TransferReadOp op,
                                          indices);
 
   nvgpu::LdMatrixOp newOp = rewriter.create<nvgpu::LdMatrixOp>(
-      loc, vectorType, op.getSource(), indices, *transpose, params->numTiles);
+      loc, vectorType, op.getBase(), indices, *transpose, params->numTiles);
   valueMapping[op] = newOp->getResult(0);
   return success();
 }
@@ -819,7 +819,7 @@ createNonLdMatrixLoads(RewriterBase &rewriter, vector::TransferReadOp op,
           rewriter, op, *coords, {laneId, logicalValueId}, newIndices);
 
       Value el = rewriter.create<vector::LoadOp>(loc, loadedElType,
-                                                 op.getSource(), newIndices);
+                                                 op.getBase(), newIndices);
       result = rewriter.create<vector::InsertOp>(loc, el, result, i);
     }
   } else {
@@ -842,7 +842,7 @@ createNonLdMatrixLoads(RewriterBase &rewriter, vector::TransferReadOp op,
         getXferIndices<vector::TransferReadOp>(
             rewriter, op, *coords, {laneId, logicalValueId}, newIndices);
         Value el = rewriter.create<memref::LoadOp>(op.getLoc(), loadedElType,
-                                                   op.getSource(), newIndices);
+                                                   op.getBase(), newIndices);
         result = rewriter.create<vector::InsertOp>(
             op.getLoc(), el, result, ArrayRef<int64_t>{i, innerIdx});
       }
@@ -876,7 +876,7 @@ convertTransferReadToLoads(RewriterBase &rewriter, vector::TransferReadOp op,
     return rewriter.notifyMatchFailure(op, "no warpMatrixInfo");
 
   bool isLdMatrixCompatible =
-      isSharedMemory(cast<MemRefType>(op.getSource().getType())) &&
+      isSharedMemory(cast<MemRefType>(op.getBase().getType())) &&
       nvgpu::inferTileWidthInBits(*warpMatrixInfo) == 128;
 
   VectorType vecTy = op.getVectorType();
@@ -934,7 +934,7 @@ convertTransferWriteToStores(RewriterBase &rewriter, vector::TransferWriteOp op,
     SmallVector<Value, 4> newIndices;
     getXferIndices<vector::TransferWriteOp>(
         rewriter, op, *coords, {laneId, logicalValueId}, newIndices);
-    rewriter.create<vector::StoreOp>(loc, el, op.getSource(), newIndices);
+    rewriter.create<vector::StoreOp>(loc, el, op.getBase(), newIndices);
   }
 
   LLVM_DEBUG(DBGS() << "erase: " << op << "\n");
diff --git a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
index b9b598c02b4a2..b55fe306d9829 100644
--- a/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
+++ b/mlir/lib/Conversion/VectorToSCF/VectorToSCF.cpp
@@ -199,7 +199,7 @@ static Value generateInBoundsCheck(
   ImplicitLocOpBuilder lb(xferOp.getLoc(), b);
   if (!xferOp.isDimInBounds(0) && !isBroadcast) {
     Value memrefDim =
-        vector::createOrFoldDimOp(b, loc, xferOp.getSource(), *dim);
+        vector::createOrFoldDimOp(b, loc, xferOp.getBase(), *dim);
     AffineExpr d0, d1;
     bindDims(xferOp.getContext(), d0, d1);
     Value base = xferOp.getIndices()[*dim];
@@ -426,7 +426,7 @@ struct Strategy<TransferReadOp> {
     auto vecType = dyn_cast<VectorType>(bufferType.getElementType());
     auto inBoundsAttr = dropFirstElem(b, xferOp.getInBoundsAttr());
     auto newXferOp = b.create<vector::TransferReadOp>(
-        loc, vecType, xferOp.getSource(), xferIndices,
+        loc, vecType, xferOp.getBase(), xferIndices,
         AffineMapAttr::get(unpackedPermutationMap(b, xferOp)),
         xferOp.getPadding(), Value(), inBoundsAttr);
 
@@ -512,7 +512,7 @@ struct Strategy<TransferWriteOp> {
     Location loc = xferOp.getLoc();
     auto vec = b.create<memref::LoadOp>(loc, buffer, loadIndices);
     auto inBoundsAttr = dropFirstElem(b, xferOp.getInBoundsAttr());
-    auto source = loopState.empty() ? xferOp.getSource() : loopState[0];
+    auto source = loopState.empty() ? xferOp.getBase() : loopState[0];
     Type type = isTensorOp(xferOp) ? xferOp.getShapedType() : Type();
     auto newXferOp = b.create<vector::TransferWriteOp>(
         loc, type, vec, source, xferIndices,
@@ -544,7 +544,7 @@ struct Strategy<TransferWriteOp> {
 
   /// Return the initial loop state for the generated scf.for loop.
   static Value initialLoopState(TransferWriteOp xferOp) {
-    return isTensorOp(xferOp) ? xferOp.getSource() : Value();
+    return isTensorOp(xferOp) ? xferOp.getBase() : Value();
   }
 };
 
@@ -1145,7 +1145,7 @@ struct ScalableTransposeTransferWriteConversion
           ArrayRef<OpFoldResult>(*maskDims).drop_front());
     }
 
-    Value initDest = isTensorOp(writeOp) ? writeOp.getSource() : Value{};
+    Value initDest = isTensorOp(writeOp) ? writeOp.getBase() : Value{};
     ValueRange initLoopArgs = initDest ? initDest : ValueRange{};
     auto result = rewriter.create<scf::ForOp>(
         loc, lb, ub, step, initLoopArgs,
@@ -1165,7 +1165,7 @@ struct ScalableTransposeTransferWriteConversion
 
           // Create the transfer_write for the slice.
           Value dest =
-              loopIterArgs.empty() ? writeOp.getSource() : loopIterArgs.front();
+              loopIterArgs.empty() ? writeOp.getBase() : loopIterArgs.front();
           auto newWriteOp = b.create<vector::TransferWriteOp>(
               loc, sliceVec, dest, xferIndices,
               ArrayRef<bool>(writeOp.getInBoundsValues()).drop_front());
@@ -1340,7 +1340,7 @@ struct UnrollTransferReadConversion
 
             auto inBoundsAttr = dropFirstElem(b, xferOp.getInBoundsAttr());
             auto newXferOp = b.create<vector::TransferReadOp>(
-                loc, newXferVecType, xferOp.getSource(), xferIndices,
+                loc, newXferVecType, xferOp.getBase(), xferIndices,
                 AffineMapAttr::get(unpackedPermutationMap(b, xferOp)),
                 xferOp.getPadding(), Value(), inBoundsAttr);
             maybeAssignMask(b, xferOp, newXferOp, i);
@@ -1449,7 +1449,7 @@ struct UnrollTransferWriteConversion
     }
 
     int64_t dimSize = inputVectorTy.getShape()[0];
-    Value source = xferOp.getSource(); // memref or tensor to be written to.
+    Value source = xferOp.getBase(); // memref or tensor to be written to.
     auto sourceType = isTensorOp(xferOp) ? xferOp.getShapedType() : Type();
 
     // Generate fully unrolled loop of transfer ops.
@@ -1568,7 +1568,7 @@ struct Strategy1d<TransferReadOp> {
         /*inBoundsCase=*/
         [&](OpBuilder &b, Location loc) {
           Value val =
-              b.create<memref::LoadOp>(loc, xferOp.getSource(), indices);
+              b.create<memref::LoadOp>(loc, xferOp.getBase(), indices);
           return b.create<vector::InsertElementOp>(loc, val, vec, iv);
         },
         /*outOfBoundsCase=*/
@@ -1599,7 +1599,7 @@ struct Strategy1d<TransferWriteOp> {
         /*inBoundsCase=*/[&](OpBuilder &b, Location loc) {
           auto val =
               b.create<vector::ExtractElementOp>(loc, xferOp.getVector(), iv);
-          b.create<memref::StoreOp>(loc, val, xferOp.getSource(), indices);
+          b.create<memref::StoreOp>(loc, val, xferOp.getBase(), indices);
         });
     b.create<scf::YieldOp>(loc);
   }
diff --git a/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp b/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
index 0bc0f2fca2c3b..30145ea322e79 100644
--- a/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
+++ b/mlir/lib/Conversion/VectorToXeGPU/VectorToXeGPU.cpp
@@ -192,7 +192,7 @@ struct TransferReadLowering : public OpRewritePattern<vector::TransferReadOp> {
 
     xegpu::CreateNdDescOp ndDesc =
         createNdDescriptor(rewriter, loc, descType,
-                           dyn_cast<TypedValue<MemRefType>>(readOp.getSource()),
+                           dyn_cast<TypedValue<MemRefType>>(readOp.getBase()),
                            readOp.getIndices());
 
     DenseI64ArrayAttr transposeAttr =
@@ -233,7 +233,7 @@ struct TransferWriteLowering
         xegpu::MemorySpace::Global);
     xegpu::CreateNdDescOp ndDesc = createNdDescriptor(
         rewriter, loc, descType,
-        dyn_cast<TypedValue<MemRefType>>(writeOp.getSource()),
+        dyn_cast<TypedValue<MemRefType>>(writeOp.getBase()),
         writeOp.getIndices());
 
     // By default, no specific caching policy is assigned.
diff --git a/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp b/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
index 9f64abb5a8860..cd41765dec2a2 100644
--- a/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
+++ b/mlir/lib/Dialect/AMDGPU/Transforms/TransferReadToLoad.cpp
@@ -118,7 +118,7 @@ static Value createVectorLoadForMaskedLoad(OpBuilder &builder, Location loc,
   Value fill = builder.create<vector::SplatOp>(loc, unbroadcastedVectorType,
                                                readOp.getPadding());
   Value load = builder.create<vector::LoadOp>(
-      loc, unbroadcastedVectorType, readOp.getSource(), readOp.getIndices());
+      loc, unbroadcastedVectorType, readOp.getBase(), readOp.getIndices());
   Value res = builder.create<arith::SelectOp>(loc, unbroadcastedVectorType,
                                               readOp.getMask(), load, fill);
   // Insert a broadcasting op if required.
@@ -149,7 +149,7 @@ struct TransferReadLowering final : OpRewritePattern<vector::TransferReadOp> {
     }
 
     Location loc = readOp.getLoc();
-    Value src = readOp.getSource();
+    Value src = readOp.getBase();
 
     VectorType vectorType = readOp.getVectorType();
     int64_t vectorSize = vectorType.getNumElements();
diff --git a/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp b/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp
index 62a148d2b7e62..95965872f4098 100644
--- a/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp
+++ b/mlir/lib/Dialect/ArmSME/Transforms/VectorLegalization.cpp
@@ -315,7 +315,7 @@ struct LegalizeTransferReadOpsByDecomposition
          decomposeToSMETiles(rewriter, vectorType, smeTileType, transposed)) {
       auto smeMask = extractSMEMask(rewriter, loc, mask, smeTile);
       auto smeRead = rewriter.create<vector::TransferReadOp>(
-          loc, smeTileType, readOp.getSource(),
+          loc, smeTileType, readOp.getBase(),
           getSMESubTileIndices(rewriter, loc, readOp.getIndices(), smeTile),
           readOp.getPermutationMapAttr(), readOp.getPadding(), smeMask,
           readOp.getInBoundsAttr());
@@ -359,7 +359,7 @@ struct LegalizeTransferWriteOpsByDecomposition
     auto smeTileType = getSMETileTypeForElement(vectorType.getElementType());
     auto inputSMETiles = adaptor.getValueToStore();
 
-    Value destTensorOrMemref = writeOp.getSource();
+    Value destTensorOrMemref = writeOp.getBase();
     for (auto [index, smeTile] : llvm::enumerate(decomposeToSMETiles(
              rewriter, vectorType, smeTileType, transposed))) {
       auto smeMask = extractSMEMask(rewriter, loc, mask, smeTile);
@@ -497,7 +497,7 @@ struct LegalizeMultiTileTransferWriteAsStoreLoop
       auto slice =
           rewriter.create<vector::ExtractOp>(loc, tile, tileSliceIndex);
       rewriter.create<vector::TransferWriteOp>(
-          loc, slice, writeOp.getSource(), ValueRange{storeRow, storeCol},
+          loc, slice, writeOp.getBase(), ValueRange{storeRow, storeCol},
           AffineMapAttr::get(writeOp.getPermutationMap().dropResult(0)),
           sliceMask,
           rewriter.getBoolArrayAttr(
@@ -677,7 +677,7 @@ struct LiftIllegalVectorTransposeToMemory
         });
     SmallVector<Value> strides(readType.getRank(), Value(one));
     auto readSubview = rewriter.create<memref::SubViewOp>(
-        loc, illegalRead.getSource(), illegalRead.getIndices(), readSizes,
+        loc, illegalRead.getBase(), illegalRead.getIndices(), readSizes,
         strides);
 
     // Apply the transpose to all values/attributes of the transfer_read:
@@ -851,7 +851,7 @@ struct LowerIllegalTransposeStoreViaZA
 
     // Note: We need to use `get_tile` as ther...
[truncated]

github-actions bot commented Apr 29, 2025

✅ With the latest revision this PR passed the C/C++ code formatter.

@banach-space banach-space force-pushed the andrzej/vector/rename_get_source branch from 7fd6e8b to 460adfa on April 29, 2025 19:08
Contributor

@krzysz00 krzysz00 left a comment

Sure, it's an awkward change for downstreams, but it seems like a sensible refactor. +1 from me, though we should wait to make sure no one else objects.

@banach-space
Contributor Author

it's an awkward change for downstreams

Yeah, I know. Let me know if you see any way to minimise the disruption. For better visibility, I've advertised this on Discourse:

@krzysz00
Contributor

You could keep the old getters via extraClassDeclaration and mark them deprecated

@banach-space
Contributor Author

You could keep the old getters via extraClassDeclaration and mark them deprecated

Updated, thanks for the suggestion!
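
For reference, a minimal sketch of what such a compatibility shim could look like in ODS — the deprecation wording and placement here are illustrative assumptions, not necessarily what this PR landed:

```tablegen
// Sketch only: keep the old accessor compiling for downstreams while
// steering them to the new name. The C++ body is injected verbatim into
// the generated op class.
let extraClassDeclaration = [{
  /// Deprecated alias for getBase(); kept so downstream code keeps
  /// compiling during the migration.
  [[deprecated("Use getBase() instead.")]] ::mlir::Value getSource() {
    return getBase();
  }
}];
```

With a shim like this, downstream users get a compiler warning pointing at the new name rather than a hard build break.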

Member

@Groverkss Groverkss left a comment

Non-blocking suggestion: can we use dest for write-like operations? These are destination-passing-style operations, and dest makes that immediately clear; base is not as clear.
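
To make the suggestion concrete, here is a hypothetical variant of the write op's operand list (abridged, and explicitly not what this PR does):

```tablegen
// Hypothetical alternative for vector.transfer_write only: name the
// written-to operand `dest` to reflect destination-passing style.
// Remaining attributes elided for brevity.
Arguments<(ins AnyVectorOfAnyRank:$valueToStore,
               AnyShaped:$dest,
               Variadic<Index>:$indices,
               AffineMapAttr:$permutation_map,
               Optional<VectorOfNonZeroRankOf<[I1]>>:$mask)>
```

Either spelling is mechanically equivalent; the trade-off is consistency with the existing vector load/store ops, which already use base, versus matching the dest convention of other destination-style ops.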
