[Mlir-commits] [mlir] [MLIR][XeGPU] refine verifier for TensorDescType (PR #137226)
llvmlistbot at llvm.org
Mon Apr 28 12:30:13 PDT 2025
github-actions[bot] wrote:
⚠️ The C/C++ code formatter, clang-format, found issues in your code. ⚠️
You can test this locally with the following command:
``````````bash
git-clang-format --diff HEAD~1 HEAD --extensions cpp,h -- mlir/include/mlir/Dialect/XeGPU/IR/XeGPU.h mlir/lib/Dialect/XeGPU/IR/XeGPUDialect.cpp mlir/lib/Dialect/XeGPU/IR/XeGPUOps.cpp
``````````
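Run the command from the repository root with the PR's commits checked out; with `--diff` it only prints the suggested changes. If the suggested formatting looks right, the same tool can also apply it in place (for example, `git-clang-format HEAD~1` with a clean tree at the PR's HEAD; that invocation is an assumption about your local setup, not part of the bot's message).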
View the diff from clang-format here:
``````````diff
diff --git a/mlir/lib/Dialect/XeGPU/IR/XeGPUDialect.cpp b/mlir/lib/Dialect/XeGPU/IR/XeGPUDialect.cpp
index 35a3eb058..f5fe959a2 100644
--- a/mlir/lib/Dialect/XeGPU/IR/XeGPUDialect.cpp
+++ b/mlir/lib/Dialect/XeGPU/IR/XeGPUDialect.cpp
@@ -37,18 +37,19 @@ bool XeGPUDialect::isEvenlyDistributable(llvm::ArrayRef<int64_t> shape,
xegpu::LayoutAttr attr) {
assert(attr && "Layout attribute is missing.");
- // Checks whether the given shape can be evenly distributed using the specified
- // layout and data attributes. If successful, it returns the work size for each
- // compute unit; otherwise, it returns `std::nullopt`. The work size per compute
- // unit is calculated as follows:
+ // Checks whether the given shape can be evenly distributed using the
+ // specified layout and data attributes. If successful, it returns the work
+ // size for each compute unit; otherwise, it returns `std::nullopt`. The work
+ // size per compute unit is calculated as follows:
// - If `data` is null: newShape[i] = shape[i] / layout[i]
// - If `data` is not null: newShape[i] = data[i]
- // When round-robin distribution (`use_rr`) is enabled, `shape[i]` can be smaller
- // than `layout[i] * data[i]`, allowing multiple compute units to share the data.
- auto tryDistribute =
- [&](llvm::ArrayRef<int64_t> shape, DenseI32ArrayAttr layout,
- DenseI32ArrayAttr data,
- bool use_rr = true) -> std::optional<SmallVector<int64_t>> {
+ // When round-robin distribution (`use_rr`) is enabled, `shape[i]` can be
+ // smaller than `layout[i] * data[i]`, allowing multiple compute units to
+ // share the data.
+ auto tryDistribute = [&](llvm::ArrayRef<int64_t> shape,
+ DenseI32ArrayAttr layout, DenseI32ArrayAttr data,
+ bool use_rr =
+ true) -> std::optional<SmallVector<int64_t>> {
llvm::SmallVector<int64_t> newShape(shape);
if (layout) {
auto vec = llvm::to_vector_of<int64_t>(layout.asArrayRef());
@@ -91,8 +92,8 @@ bool XeGPUDialect::isEvenlyDistributable(llvm::ArrayRef<int64_t> shape,
auto instShape = maybeInstShape.value();
// check LaneLayout and LaneData
- auto maybeLaneShape = tryDistribute(instShape, attr.getLaneLayout(),
- attr.getLaneData(), false);
+ auto maybeLaneShape =
+ tryDistribute(instShape, attr.getLaneLayout(), attr.getLaneData(), false);
return maybeLaneShape.has_value();
}
``````````
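For context on the comment being rewrapped in this diff: the distribution rule it describes is simple enough to sketch standalone. The snippet below is a hypothetical, simplified illustration of that work-size computation; the name `computeWorkSize` and the plain-vector signature are made up here and do not match the actual `tryDistribute` lambda in XeGPUDialect.cpp, which operates on `DenseI32ArrayAttr` and performs additional checks.

``````````cpp
#include <cstdint>
#include <optional>
#include <vector>

// Hypothetical sketch of the work-size rule described in the comment:
//   - if `data` is empty:  newShape[i] = shape[i] / layout[i]
//   - if `data` is given:  newShape[i] = data[i]
// With round-robin enabled, shape[i] may be smaller than
// layout[i] * data[i], so multiple compute units share the data.
std::optional<std::vector<int64_t>>
computeWorkSize(const std::vector<int64_t> &shape,
                const std::vector<int64_t> &layout,
                const std::vector<int64_t> &data, bool useRoundRobin = true) {
  if (layout.size() != shape.size())
    return std::nullopt;
  if (!data.empty() && data.size() != shape.size())
    return std::nullopt;
  std::vector<int64_t> newShape(shape);
  for (size_t i = 0; i < shape.size(); ++i) {
    if (data.empty()) {
      // Even split: every compute unit gets an equal slice.
      if (shape[i] % layout[i] != 0)
        return std::nullopt;
      newShape[i] = shape[i] / layout[i];
    } else {
      // Fixed chunk per compute unit; without round-robin the chunks
      // must tile the dimension exactly.
      if (!useRoundRobin && layout[i] * data[i] != shape[i])
        return std::nullopt;
      newShape[i] = data[i];
    }
  }
  return newShape;
}
``````````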
https://github.com/llvm/llvm-project/pull/137226
More information about the Mlir-commits mailing list