[Mlir-commits] [mlir] [MLIR][tensor] Improve `tensor.pack` verifier to catch more cases with unconditional runtime errors (PR #77217)

lorenzo chelini llvmlistbot at llvm.org
Mon Jan 8 11:55:25 PST 2024


chelini wrote:

> > Would it make sense to keep the padding behavior consistent between the static and dynamic behavior, having padding optional in both cases?
> 
> I don't think I understand this. Can you elaborate? I thought that's what I was doing. That is, in current `main`, when tile sizes are static we require padding when the tile size doesn't divide the input size. The proposed changes here extend this to the dynamic tile case, but require padding when the output size doesn't divide the input size. That's consistent by my definition, so obviously I'm missing something here.

I answered Nicolas's comment and argued that keeping the padding optional in the dynamic case would be better (unless I misunderstood his message). I see op verification this way: if a padding value is specified, we pad the high end of each dimension to make the tiles complete; the tile then divides the dimension and the IR is valid. If no padding value is specified, there are two cases:
- If we have enough static information to prove that a tile does not evenly divide a dimension, the verifier should emit an error. As far as I can see, this PR goes in that direction: the input and output shapes are constant, and you error out because the tile cannot fully divide the dimension unless we pad.
- If the shapes or tile sizes are dynamic and it is impossible to prove whether the tile divides the dimension, the verifier should not emit any error; it is undefined behavior at runtime if the tile does not divide the dimension and no padding value is specified (see the sketch below).
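
To make the two cases concrete, here is a small sketch based on the `tensor.pack` examples in the op documentation; the SSA names, the `%tile` value, and the shapes in the last example are illustrative and not taken from this PR:

```mlir
// Static tile sizes, no padding value: 8 does not divide 13, so the
// verifier can (and already does, for fully static shapes) reject this.
%0 = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [8, 2]
    into %dest : tensor<13x15xf32> -> tensor<2x8x8x2xf32>

// Same shapes, but a padding value makes the incomplete tiles valid:
// the high sides of both dimensions are padded and the IR verifies.
%1 = tensor.pack %src padding_value(%pad : f32)
    inner_dims_pos = [0, 1] inner_tiles = [8, 2]
    into %dest : tensor<13x15xf32> -> tensor<2x8x8x2xf32>

// Dynamic tile size with dynamic output dims: divisibility cannot be
// proven statically, so the verifier stays silent; if %tile does not
// divide 13 at runtime and no padding value is given, the behavior is
// undefined.
%2 = tensor.pack %src inner_dims_pos = [0, 1] inner_tiles = [%tile, 5]
    into %dest2 : tensor<13x15xf32> -> tensor<?x3x?x5xf32>
```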

https://github.com/llvm/llvm-project/pull/77217
