[PATCH] D142445: [mlir][tensor|memref] Harden the checks on dim op

Mehdi AMINI via Phabricator via llvm-commits
llvm-commits at lists.llvm.org
Thu Jan 26 15:57:20 PST 2023

mehdi_amini added a comment.
In D142445#4081964 <https://reviews.llvm.org/D142445#4081964>, @qcolombet wrote:
> Hi @mehdi_amini,
>
> After talking with @aartbik and sleeping on it, I think making 0-ranked dim op invalid makes sense. I don't believe we can produce them with optimizations.
Seems reasonable for now: producing this would involve a type change, which isn't a "normal" safe thing to do (which is why I wasn't sure about this).
The part that is a bit speculative and worried me a bit is that it creates an "edge case", making rank-0 shapes non-uniform to handle. For example, take some pseudo-IR like this:
  // Return the first dimension, or 0 if rank == 0.
  func @unranked(%arg0 : memref<*xf32>) -> index {
    %rank = memref.rank %arg0 : memref<*xf32>
    %zero = arith.constant 0 : index
    %rank_not_zero = arith.cmpi ne, %rank, %zero : index
    %res = scf.if %rank_not_zero -> (index) {
      %dim = memref.dim %arg0, %zero : memref<*xf32>
      scf.yield %dim : index
    } else {
      scf.yield %zero : index
    }
    return %res : index
  }
This is fairly useless code, but just to illustrate my point: assume this is a generic / reusable routine part of a library.
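In plain terms, the routine above computes something like the following (a Python sketch of the intended semantics only, not part of the patch):

```python
def first_dim_or_zero(shape):
    """Model of the pseudo-IR: return the first dimension,
    or 0 when the shape has rank 0."""
    rank = len(shape)
    if rank != 0:
        return shape[0]
    return 0
```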
Now when we integrate this into a larger program, we could try some inlining and/or function specialization to turn the unranked memref into a ranked one. So for a call site with a rank-1 memref you would specialize it as:
  func @unranked1(%arg0 : memref<?xf32>) -> index {
    %rank = memref.rank %arg0 : memref<?xf32>
    %zero = arith.constant 0 : index
    %rank_not_zero = arith.cmpi ne, %rank, %zero : index
    %res = scf.if %rank_not_zero -> (index) {
      %dim = memref.dim %arg0, %zero : memref<?xf32>
      scf.yield %dim : index
    } else {
      scf.yield %zero : index
    }
    return %res : index
  }
which then can fold (`memref.rank` folds to 1):
  func @unranked1(%arg0 : memref<?xf32>) -> index {
    %zero = arith.constant 0 : index
    %dim = memref.dim %arg0, %zero : memref<?xf32>
    return %dim : index
  }
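The chain of folds can be sketched as follows (a hypothetical Python model of the rewrite steps, not actual MLIR code; the return strings just name the surviving op):

```python
def fold_specialized(static_rank):
    """Model the folds after specialization: memref.rank folds to the
    static rank, `rank != 0` folds to a constant, and scf.if collapses
    to the branch selected by that constant."""
    rank_not_zero = static_rank != 0   # arith.cmpi folds to a constant
    if rank_not_zero:
        # then-branch survives: the dim query remains
        return "memref.dim"
    # else-branch survives: the result is the constant zero
    return "constant 0"
```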
But applying the same specialization logic for rank 0 produces invalid IR:
  func @unranked0(%arg0 : memref<f32>) -> index {
    %rank = memref.rank %arg0 : memref<f32>
    %zero = arith.constant 0 : index
    %rank_not_zero = arith.cmpi ne, %rank, %zero : index
    %res = scf.if %rank_not_zero -> (index) {
      %dim = memref.dim %arg0, %zero : memref<f32> // <= verifier error
      scf.yield %dim : index
    } else {
      scf.yield %zero : index
    }
    return %res : index
  }
Even though the folding would yield:
  func @unranked0(%arg0 : memref<f32>) -> index {
    %zero = arith.constant 0 : index
    return %zero : index
  }
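For reference, the hardened check boils down to requiring a constant dimension index to be in bounds, and no index can be in bounds when the rank is 0 (a minimal sketch with illustrative names, not the actual verifier code):

```python
def dim_index_is_valid(rank, index):
    # A constant dim index must satisfy 0 <= index < rank,
    # so a rank-0 shape admits no valid index at all.
    return 0 <= index < rank
```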
> Regarding the out-of-bound accesses for the non-0-ranked shapes, I will do a separate patch to remove the verifier checks and do something sensible (unreachable, ..., anything but crash).
Thanks!
Repository:
  rG LLVM Github Monorepo
CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D142445/new/
https://reviews.llvm.org/D142445
More information about the llvm-commits mailing list