[PATCH] D155306: [mlir][ArmSME] Add tile load op and extend tile store tile size support

Andrzej Warzynski via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Fri Jul 21 01:11:40 PDT 2023


awarzynski accepted this revision.
awarzynski added a comment.

LGTM, thanks for the updates!



================
Comment at: mlir/lib/Conversion/VectorToArmSME/VectorToArmSME.cpp:116
+      vector::LoadOp>::VectorLoadStoreToArmSMELowering;
+};
+
----------------
dcaballe wrote:
> why do we need these specializations?
I do see _why_, but I agree with Diego that it's not immediately clear. 


================
Comment at: mlir/test/Integration/Dialect/Vector/CPU/ArmSME/vector-load-store.mlir:77
+
+  // Verify "mem1" == "mem2"
+  %init_1 = arith.constant 1 : i64
----------------
c-rhodes wrote:
> awarzynski wrote:
> > Having a "negative" would be good too (i.e. verify that `mem1 != mem2`).
> 
> After zeroing mem2?
That comment was for:

>   // Verify "mem1" == "mem2"

I am suggesting a negative test, because current testing won't capture scenarios where something goes badly wrong. For example, when `mem1` and `mem2` end up pointing to the same memory location. Or, put differently, we can make sure that two mem buffers are identical. Can we make sure that they are different?

This is something to consider adding, not a blocker. The testing in this patch is already pretty thorough.
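For illustration, a minimal sketch of what such a negative check could look like, assuming `mem1`/`mem2` are 1-D `i64` memrefs (the names, shapes, and loop bounds here are assumptions for illustration, not the patch's actual code):

```mlir
// Hypothetical negative check: after verifying mem1 == mem2, overwrite
// mem2 with zeros and confirm the buffers now differ. If mem1 and mem2
// alias the same allocation, zeroing mem2 would also zero mem1 and this
// check would catch it.
%c0 = arith.constant 0 : index
%c1 = arith.constant 1 : index
%zero = arith.constant 0 : i64
scf.for %i = %c0 to %dim step %c1 {
  memref.store %zero, %mem2[%i] : memref<?xi64>
}
scf.for %i = %c0 to %dim step %c1 {
  %a = memref.load %mem1[%i] : memref<?xi64>
  %b = memref.load %mem2[%i] : memref<?xi64>
  // Expect the buffers to differ element-wise after zeroing mem2.
  %neq = arith.cmpi ne, %a, %b : i64
  vector.print %neq : i1
}
```

The printed values could then be matched with FileCheck, mirroring how the existing positive check in the test is verified.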


CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D155306/new/

https://reviews.llvm.org/D155306


