[Mlir-commits] [mlir] [mlir][xegpu] Relax rank restriction of TensorDescType (PR #145916)
Chao Chen
llvmlistbot at llvm.org
Mon Jun 30 11:10:48 PDT 2025
================
@@ -303,9 +303,7 @@ void XeGPUBlockingPass::runOnOperation() {
// If the encoding is a ScatterTensorDescAttr, we need to
// potentially adjust the chunk size based on the inst_data.
if (tdescTy.isScattered()) {
- auto scatterAttr =
- llvm::dyn_cast_if_present<xegpu::ScatterTensorDescAttr>(encoding);
- int64_t chunkSize = scatterAttr.getChunkSize().getInt();
+ int64_t chunkSize = tdescTy.getChunkSize();
----------------
chencha3 wrote:
> I mean now anyone with a tensor_desc can call this method. It not immediately clear from the API that it requires a scatter encoding. so anywhere you call this method you need to guard it by if `(tensorDesc.hasScattert())`, if not it as an unsafe call (assert will be removed in release build). So I don't see any direct benefit of exposing this to tensorDesc.
BTW, why is the assert removed in release builds? I think this guard is necessary in both cases: `tensorDesc->ScatteredAttr->getChunkSize` also needs it. I think the only clean way to do this is to define two TensorDesc types, one each for blocked and scattered tensor descs, e.g., BlockedTensorDesc vs ScatteredTensorDesc, instead of distinguishing them via attributes.
https://github.com/llvm/llvm-project/pull/145916