[PATCH] D76244: [mlir][spirv] Make SPIRVTypeConverter target environment aware

Lei Zhang via Phabricator via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 17 07:25:12 PDT 2020


antiagainst added inline comments.


================
Comment at: mlir/include/mlir/Dialect/SPIRV/SPIRVLowering.h:38
+///
+/// TODO(antiagainst): We might want to introduce a way to control how
+/// unsupported bitwidths are handled and to explicitly fail if wanted.
----------------
mravishankar wrote:
> Cant we do this right now? If any tests failed we can update the extension/capability accordingly and not just force expand right?
I'd prefer to defer this given it's not needed immediately and this commit is already ~1000 LOC change. The TODO here is not meant for tests. For tests where we care about the type conversions, we should probably test both with and without capabilities. For tests where we don't care about type conversions, it does not matter. This is for real-world use cases. Given a hardware implementation, the extension/capabilities are already predetermined; adding more does not work and we must dance inside the boundaries here. I'd assume this is the majority of the use cases.


================
Comment at: mlir/lib/Dialect/SPIRV/SPIRVLowering.cpp:285
+
+static Optional<Type> convertTensorType(const spirv::TargetEnv &targetEnv,
+                                        TensorType type) {
----------------
mravishankar wrote:
> I understand that this maintains existing functionality, but I am actually not sure it is a good thing to lower from TensorType directly into a SPIR-V type. The SPIR-V array type is not an SSA data structure; it is a load/store data structure. Please remove this. We should never have supported this in the first place.
There is a complication here. The type conversion was introduced for lowering constant tensors (https://reviews.llvm.org/D73022). In SPIR-V, one creates composite (vector/matrix/array/struct) constants with `OpConstantComposite` to embed relatively large constant values, and uses `OpCompositeExtract` and `OpCompositeInsert` to read and write their elements. So array types are not used only in the load/store way; they can also be used in an SSA way, like what we do for vectors.
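For illustration, a hand-written SPIR-V assembly sketch (not taken from the patch) of an array constant that is created and consumed entirely as an SSA value, with no load/store involved:

```
%uint   = OpTypeInt 32 0
%uint_3 = OpConstant %uint 3
%arr    = OpTypeArray %uint %uint_3
%c1     = OpConstant %uint 1
%c2     = OpConstant %uint 2
%c3     = OpConstant %uint 3
%carr   = OpConstantComposite %arr %c1 %c2 %c3
; later, inside a function body: extract element 1 as a plain SSA value
%elem   = OpCompositeExtract %uint %carr 1
```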

I guess the real problem is that in some frontend dialects (like TensorFlow) everything is represented as a tensor, even scalars. This sometimes forces the tensor abstraction down the stack, especially in early development stages. I would think it's better to actually turn such constant tensors into constant vectors in a pre-pass so it's cleaner.

Added some comments.


Repository:
  rG LLVM Github Monorepo

CHANGES SINCE LAST ACTION
  https://reviews.llvm.org/D76244/new/

https://reviews.llvm.org/D76244
