[Mlir-commits] [mlir] [mlir][vector] Add patterns for vector masked load/store (PR #74834)

Diego Caballero llvmlistbot at llvm.org
Wed Dec 13 01:59:39 PST 2023


================
@@ -0,0 +1,122 @@
+//===- LowerVectorMaskedLoadStore.cpp - Lower 'vector.maskedload/store' op ===//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file implements target-independent rewrites and utilities to lower the
+// 'vector.maskedload' and 'vector.maskedstore' operations.
+//
+//===----------------------------------------------------------------------===//
+
+#include "mlir/Dialect/MemRef/IR/MemRef.h"
+#include "mlir/Dialect/SCF/IR/SCF.h"
+#include "mlir/Dialect/Vector/Transforms/LoweringPatterns.h"
+
+#define DEBUG_TYPE "vector-masked-load-store-lowering"
+
+using namespace mlir;
+
+namespace {
+
+/// Convert vector.maskedload
+///
+/// Before:
+///
+///   vector.maskedload %base[%idx_0, %idx_1], %mask, %pass_thru
+///
+/// After:
+///
+///   %value = vector.load %base[%idx_0, %idx_1]
+///   arith.select %mask, %value, %pass_thru
+///
+struct VectorMaskedLoadOpConverter : OpRewritePattern<vector::MaskedLoadOp> {
+  using OpRewritePattern::OpRewritePattern;
+
+  LogicalResult matchAndRewrite(vector::MaskedLoadOp maskedLoadOp,
+                                PatternRewriter &rewriter) const override {
+    auto loc = maskedLoadOp.getLoc();
+    auto loadAll = rewriter.create<vector::LoadOp>(loc, maskedLoadOp.getType(),
+                                                   maskedLoadOp.getBase(),
+                                                   maskedLoadOp.getIndices());
+    auto selectedLoad = rewriter.create<arith::SelectOp>(
+        loc, maskedLoadOp.getMask(), loadAll, maskedLoadOp.getPassThru());
----------------
dcaballe wrote:

Yes, we would need something similar to the loop in the store lowering, or we will read out of bounds and get a segfault. It's true that if the address is aligned to the vector-length boundary, we can "safely" read out of bounds on some architectures (e.g., x86), but currently there's no way to know whether a vector load is aligned.
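
For illustration only, here is a rough sketch of the kind of per-lane, mask-guarded lowering I mean (mirroring the loop in the store lowering). This is not the code in this PR: the pattern name is made up, it assumes a 1-D fixed-size mask and the same includes/`using namespace mlir` as the file above, and the exact builder overloads may differ from what ends up upstream.

```cpp
/// Hypothetical per-lane lowering for 'vector.maskedload' (illustrative only).
struct ScalarizedMaskedLoadSketch : OpRewritePattern<vector::MaskedLoadOp> {
  using OpRewritePattern::OpRewritePattern;

  LogicalResult matchAndRewrite(vector::MaskedLoadOp maskedLoadOp,
                                PatternRewriter &rewriter) const override {
    auto maskVType = cast<VectorType>(maskedLoadOp.getMask().getType());
    if (maskVType.getRank() != 1 || maskVType.isScalable())
      return rewriter.notifyMatchFailure(maskedLoadOp,
                                         "expected 1-D fixed-size mask");

    Location loc = maskedLoadOp.getLoc();
    Value mask = maskedLoadOp.getMask();
    Value base = maskedLoadOp.getBase();
    Value result = maskedLoadOp.getPassThru();
    auto indices = llvm::to_vector(maskedLoadOp.getIndices());
    Value one = rewriter.create<arith::ConstantIndexOp>(loc, 1);

    for (int64_t i = 0, e = maskVType.getNumElements(); i < e; ++i) {
      // Only touch memory for lane i when its mask bit is set; otherwise
      // keep the pass-through value for that lane, so no address past the
      // last enabled lane is ever dereferenced.
      Value bit = rewriter.create<vector::ExtractOp>(loc, mask, i);
      auto ifOp = rewriter.create<scf::IfOp>(
          loc, bit,
          /*thenBuilder=*/
          [&](OpBuilder &b, Location l) {
            Value elem = b.create<memref::LoadOp>(l, base, indices);
            Value updated = b.create<vector::InsertOp>(l, elem, result, i);
            b.create<scf::YieldOp>(l, updated);
          },
          /*elseBuilder=*/
          [&](OpBuilder &b, Location l) { b.create<scf::YieldOp>(l, result); });
      result = ifOp.getResult(0);
      // Advance the innermost index so the next lane reads the next element.
      indices.back() =
          rewriter.create<arith::AddIOp>(loc, indices.back(), one);
    }
    rewriter.replaceOp(maskedLoadOp, result);
    return success();
  }
};
```

The unconditional `vector.load` + `arith.select` form is obviously cheaper when it's legal, so something like the above would only be the fallback when we can't prove the full-width read is in bounds.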

https://github.com/llvm/llvm-project/pull/74834

