[Mlir-commits] [mlir] [mlir][Transforms] Dialect conversion: Make materializations optional (PR #104668)
    llvmlistbot at llvm.org
    Wed Aug 28 23:03:51 PDT 2024
    
    
  
MaheshRavishankar wrote:
I think this change broke some passes downstream. I know for certain that the [`TypePropagationPass`](https://github.com/iree-org/iree/blob/main/compiler/src/iree/compiler/Codegen/Common/TypePropagationPass.cpp) is broken by it. The root cause could be an error in the pass itself, where it was using the dialect conversion (specifically the region signature conversion) incorrectly. I don't have an MLIR-only repro, but I have an IREE repro:
```
func.func @_select_dispatch_0_elementwise_4_i1xi1xi32xi32xi32() {
  %c0 = arith.constant 0 : index
  %0 = hal.interface.binding.subspan layout(<bindings = [#hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, Indirect>], flags = Indirect>) binding(0) alignment(64) offset(%c0) flags("ReadOnly|Indirect") : !flow.dispatch.tensor<readonly:tensor<4xi8>>
  %1 = hal.interface.binding.subspan layout(<bindings = [#hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, Indirect>], flags = Indirect>) binding(1) alignment(64) offset(%c0) flags("ReadOnly|Indirect") : !flow.dispatch.tensor<readonly:tensor<4xi8>>
  %2 = hal.interface.binding.subspan layout(<bindings = [#hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, Indirect>], flags = Indirect>) binding(2) alignment(64) offset(%c0) flags("ReadOnly|Indirect") : !flow.dispatch.tensor<readonly:tensor<4xi32>>
  %3 = hal.interface.binding.subspan layout(<bindings = [#hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, Indirect>], flags = Indirect>) binding(3) alignment(64) offset(%c0) flags("ReadOnly|Indirect") : !flow.dispatch.tensor<readonly:tensor<4xi32>>
  %4 = hal.interface.binding.subspan layout(<bindings = [#hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, "ReadOnly|Indirect">, #hal.pipeline.binding<storage_buffer, Indirect>], flags = Indirect>) binding(4) alignment(64) offset(%c0) flags(Indirect) : !flow.dispatch.tensor<writeonly:tensor<4xi32>>
  %5 = flow.dispatch.tensor.load %0, offsets = [0], sizes = [4], strides = [1] : !flow.dispatch.tensor<readonly:tensor<4xi8>> -> tensor<4xi8>
  %6 = arith.trunci %5 : tensor<4xi8> to tensor<4xi1>
  %7 = flow.dispatch.tensor.load %1, offsets = [0], sizes = [4], strides = [1] : !flow.dispatch.tensor<readonly:tensor<4xi8>> -> tensor<4xi8>
  %8 = arith.trunci %7 : tensor<4xi8> to tensor<4xi1>
  %9 = flow.dispatch.tensor.load %2, offsets = [0], sizes = [4], strides = [1] : !flow.dispatch.tensor<readonly:tensor<4xi32>> -> tensor<4xi32>
  %10 = flow.dispatch.tensor.load %3, offsets = [0], sizes = [4], strides = [1] : !flow.dispatch.tensor<readonly:tensor<4xi32>> -> tensor<4xi32>
  %11 = tensor.empty() : tensor<4xi32>
  %12 = linalg.generic {indexing_maps = [affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>, affine_map<(d0) -> (d0)>], iterator_types = ["parallel"]} ins(%6, %8, %9, %10 : tensor<4xi1>, tensor<4xi1>, tensor<4xi32>, tensor<4xi32>) outs(%11 : tensor<4xi32>) {
  ^bb0(%in: i1, %in_0: i1, %in_1: i32, %in_2: i32, %out: i32):
    %13 = arith.cmpi ugt, %in, %in_0 : i1
    %14 = arith.select %13, %in_1, %in_2 : i32
    linalg.yield %14 : i32
  } -> tensor<4xi32>
  flow.dispatch.tensor.store %12, %4, offsets = [0], sizes = [4], strides = [1] : tensor<4xi32> -> !flow.dispatch.tensor<writeonly:tensor<4xi32>>
  return
}
```
This fails when the change is used in IREE and the input above is compiled with
```
iree-opt --pass-pipeline="builtin.module(func.func(iree-flow-type-propagation))" repro.mlir
```
I think it also breaks some EmitC passes (I am checking that in https://github.com/iree-org/iree/pull/18384).
Leaving a note here: for now I am reverting this change locally in IREE and will come back to triage further.
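If the breakage comes from the conversion driver no longer inserting materializations implicitly, one possible workaround is to register explicit materializations on the pass's `TypeConverter`. The sketch below is an assumption about the fix, not verified against this PR, and exact callback signatures vary between MLIR versions; it only illustrates the `addSourceMaterialization`/`addTargetMaterialization` hooks using `unrealized_conversion_cast` as the bridging op.
```
// Hedged sketch (C++): make value casts between old and new types explicit
// on the TypeConverter instead of relying on the driver to create them.
mlir::TypeConverter converter;
// Identity conversion shown as a placeholder; the real pass converts types.
converter.addConversion([](mlir::Type type) { return type; });
// Cast converted values back to the original (source) type.
converter.addSourceMaterialization(
    [](mlir::OpBuilder &builder, mlir::Type resultType, mlir::ValueRange inputs,
       mlir::Location loc) -> mlir::Value {
      return builder
          .create<mlir::UnrealizedConversionCastOp>(loc, resultType, inputs)
          .getResult(0);
    });
// Cast original values to the converted (target) type.
converter.addTargetMaterialization(
    [](mlir::OpBuilder &builder, mlir::Type resultType, mlir::ValueRange inputs,
       mlir::Location loc) -> mlir::Value {
      return builder
          .create<mlir::UnrealizedConversionCastOp>(loc, resultType, inputs)
          .getResult(0);
    });
```
Whether `TypePropagationPass` actually needs these hooks (or is instead misusing the region signature conversion) is exactly what remains to be triaged.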
https://github.com/llvm/llvm-project/pull/104668
    
    
More information about the Mlir-commits mailing list