[Mlir-commits] [mlir] [mlir][nfc] Update vectorize-tensor-extract.mlir (2/N) (PR #119080)
Andrzej Warzyński
llvmlistbot at llvm.org
Tue Dec 10 07:31:22 PST 2024
banach-space wrote:
> The original check seems to be generated by a script.. Thanks for the improvements!
Thanks for taking a look! Note that you have approved patch 2/N - shall I assume that you are OK with 1/N as well?
> I have a dumb question: what does column tensor mean?
That's a good question, and you can help me validate my taxonomy. Let's assume that our row/column tensors have been extracted from a 2D tensor (`tensor<6x6xi32>`).
**Row tensor:** `tensor<1x6xi32>`.
```
. . . . . .
. . . . . .
x x x x x x
. . . . . .
. . . . . .
. . . . . .
```
**Column tensor:** `tensor<6x1xi32>`.
```
x . . . . .
x . . . . .
x . . . . .
x . . . . .
x . . . . .
x . . . . .
```
Now, the vectorizer will assume that the underlying memory access pattern is (see the sketch after this list):
* a "contiguous load" for the row tensor (all elements to read are adjacent in "memory"),
* a "gather load" for the column tensor (because there's a full "row" of elements between consecutive elements to read).
Admittedly, one big weakness of this approach is that tensors do not ... live in memory 😅
Also, a "column tensor" could totally represent something that's contiguous in memory (e.g. when starting from e.g. `tensor<36x1xi32>` instead of `tensor<6x6xi32>`).
Hope this makes sense :)
https://github.com/llvm/llvm-project/pull/119080