[PATCH] D74012: [mlir][spirv] Use spv.entry_point_abi in GPU to SPIR-V conversions

Stephan Herhut via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Thu Feb 6 02:43:55 PST 2020


herhut added a comment.

In D74012#1859486 <https://reviews.llvm.org/D74012#1859486>, @antiagainst wrote:

> In D74012#1858709 <https://reviews.llvm.org/D74012#1858709>, @mravishankar wrote:
>
> > I understand what the intent is here, but the input already has an attribute that belongs to the SPIR-V dialect before lowering.
>
>
> Generally I think it is inevitable that we'll have attributes belonging to lower layers attached to the source input at a higher layer; for example, we will have `spv.target_env` and `spv.interface_var_abi` attributes attached to dispatch regions isolated by IREE at the HLO level. The root problem is that we cannot infer everything a lower dialect needs from higher dialects alone, and it does not make sense to create duplicates all the way up through every layer. But with that said,
>
> > That makes things a bit non-composable. In cases where someone lowers to the GPU dialect and then conditionally decides to lower to either the SPIR-V dialect or the NVVM dialect, this change means a separate pass will be needed on the SPIR-V side just to add this attribute. Ideally the input should be **only** in the GPU dialect, whereas here it isn't.
>
> Here we are at the boundary between the GPU dialect and the SPIR-V dialect, so it should be fine to have SPIR-V-specific information attached to the input to drive conversions towards SPIR-V. But I get your point here regarding non-composability. If the input is, say, loops, it's better to have a proper GPU dialect attribute instead of `spv.entry_point_abi` attached to the loops to drive further conversion.
>
> > Is it possible instead to add an attribute to the GPU dialect itself that contains information about the workgroup size? Then, while lowering, we can convert one attribute to the other.
>
> Yeah, that makes sense to me. At the GPU level we also have such concepts, so we can have similar attributes; they are just contracts between different layers. Right now we are using a bunch of command-line options for that job; I'd love to see us switch to attributes there too. I've created https://llvm.discourse.group/t/using-attributes-to-specify-workgroup-configuration-when-lowering-to-gpu/496 as an RFC. I view that as a layer above SPIR-V, so it's a bit separate from the changes here IMHO.
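
For concreteness, here is a rough sketch of what such an attribute-to-attribute conversion could look like. The `gpu.workgroup_size` attribute name below is hypothetical (it stands in for the kind of GPU-level attribute being proposed, not an existing one), and the `spv.entry_point_abi` syntax follows the form used by the GPU-to-SPIR-V lowering around the time of this patch; the exact syntax may differ across MLIR versions.

  // Input: a GPU kernel annotated with a hypothetical GPU-level workgroup-size attribute.
  gpu.module @kernels {
    gpu.func @kernel(%arg0: memref<?xf32>) kernel
        attributes {gpu.workgroup_size = dense<[32, 4, 1]> : vector<3xi32>} {
      gpu.return
    }
  }

  // After an attribute-conversion step, the same function would instead carry the
  // SPIR-V ABI attribute that the GPU-to-SPIR-V conversion consumes:
  //   attributes {spv.entry_point_abi = {local_size = dense<[32, 4, 1]> : vector<3xi32>}}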


Maybe I am missing something here, but at the GPU dialect level the sizes are passed to `gpu.launch`, so you can take them from there. If you want to specialize a kernel for specific sizes, you need to ensure the call sites are compatible, as with any other function specialization. Is this more about driving the upper layers of code generation so that you end up with a `gpu.launch` that has the sizes you want? Or do you want to make `gpu.func` usable independently of `gpu.launch`?
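
As a minimal sketch of that launch-site view (the values and shapes here are illustrative, and the syntax follows the GPU/standard dialects of this era), the block dimensions are explicit operands of `gpu.launch`, so a lowering that still has the launch in scope can read the workgroup size from there rather than from a function attribute:

  func @main(%buf : memref<?xf32>) {
    %c1 = constant 1 : index
    %c32 = constant 32 : index
    // The workgroup (block) size is explicit at the launch site: 32 x 1 x 1.
    gpu.launch blocks(%bx, %by, %bz) in (%gx = %c1, %gy = %c1, %gz = %c1)
               threads(%tx, %ty, %tz) in (%sx = %c32, %sy = %c1, %sz = %c1) {
      // Kernel body goes here; it can use %tx, %ty, %tz directly.
      gpu.terminator
    }
    return
  }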


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D74012/new/

https://reviews.llvm.org/D74012




