[Mlir-commits] [mlir] [mlir][linalg] convert arith ops to destination-passing-style. (PR #157854)
Andrzej Warzyński
llvmlistbot at llvm.org
Mon Sep 15 06:38:38 PDT 2025
banach-space wrote:
> > but could you share a repro so that we can see what the issue is?
>
> Thanks @banach-space for the review. Yes, here is the reproducer (@matthias-springer is also aware of this):
>
> ```
> $ cat repro.mlir
> !qalias = !quant.uniform<i8:f32, 2.0:10>
> func.func @reproducer(%arg0: tensor<10xf32>) -> tensor<10xf32> {
>   %0 = quant.qcast %arg0 : tensor<10xf32> to tensor<10x!qalias>
>   %1 = quant.dcast %0 : tensor<10x!qalias> to tensor<10xf32>
>   return %1 : tensor<10xf32>
> }
> ```
>
> This is not limited to `-lower-quant-ops`, but here is an example. When we run
>
> ```
> $ mlir-opt -lower-quant-ops -one-shot-bufferize repro.mlir
> repro.mlir:3:8: error: op was not bufferized
>   %0 = quant.qcast %arg0 : tensor<10xf32> to tensor<10x!qalias>
>        ^
> repro.mlir:3:8: note: see current operation: %4 = "arith.divf"(%arg0, %3) <{fastmath = #arith.fastmath<none>}> : (tensor<10xf32>, tensor<10xf32>) -> tensor<10xf32>
> ```
Thanks! What I had in mind was a more involved example that would demonstrate that your new transformation is indeed required to unblock bufferization. That would make a good test, no?
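For reference, here is a minimal sketch of the destination-passing-style form that such a tensor-typed `arith.divf` could be rewritten into, assuming it is expressed as a `linalg.generic` over a `tensor.empty` destination. The function name and exact shape of the IR are illustrative, not the literal output of this PR's transformation:

```mlir
// Illustrative sketch only: one possible destination-passing-style form of the
// tensor-typed arith.divf from the note above. The elementwise division is
// wrapped in a linalg.generic whose output operand is an explicit tensor.empty,
// giving one-shot-bufferize a destination it can bufferize in place.
#map = affine_map<(d0) -> (d0)>
func.func @divf_dps(%lhs: tensor<10xf32>, %rhs: tensor<10xf32>) -> tensor<10xf32> {
  %init = tensor.empty() : tensor<10xf32>
  %res = linalg.generic
      {indexing_maps = [#map, #map, #map], iterator_types = ["parallel"]}
      ins(%lhs, %rhs : tensor<10xf32>, tensor<10xf32>)
      outs(%init : tensor<10xf32>) {
  ^bb0(%a: f32, %b: f32, %out: f32):
    %d = arith.divf %a, %b : f32
    linalg.yield %d : f32
  } -> tensor<10xf32>
  return %res : tensor<10xf32>
}
```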
https://github.com/llvm/llvm-project/pull/157854