[Mlir-commits] [mlir] [mlir][py] Add NVGPU's `TensorMapDescriptorType` in py bindings (PR #88855)

Guray Ozen llvmlistbot at llvm.org
Tue Apr 16 02:26:09 PDT 2024


================
@@ -9,5 +9,23 @@
 #include "mlir-c/Dialect/NVGPU.h"
 #include "mlir/CAPI/Registration.h"
 #include "mlir/Dialect/NVGPU/IR/NVGPUDialect.h"
+#include "mlir/IR/BuiltinTypes.h"
+
+using namespace mlir;
+using namespace mlir::nvgpu;
 
 MLIR_DEFINE_CAPI_DIALECT_REGISTRATION(NVGPU, nvgpu, mlir::nvgpu::NVGPUDialect)
+
+bool mlirTypeIsANVGPUTensorMapDescriptorType(MlirType type) {
+  return isa<nvgpu::TensorMapDescriptorType>(unwrap(type));
+}
+
+MlirType mlirNVGPUTensorMapDescriptorTypeGet(MlirContext ctx,
+                                             MlirType tensorType, int swizzle,
+                                             int l2promo, int oob,
+                                             int interleave) {
+  return wrap(nvgpu::TensorMapDescriptorType::get(
+      unwrap(ctx), cast<MemRefType>(unwrap(tensorType)),
----------------
grypp wrote:

Agreed, but here the term `tensor` originates from the CUDA driver API, specifically [cuTensorMapEncodeTiled](https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__TENSOR__MEMORY.html#group__CUDA__TENSOR__MEMORY_1ga7c7d2aaac9e49294304e755e6f341d7). While I could potentially change the `MemRefType` to `TensorType` in the op, it might look strange, since nvgpu operates on memrefs.
Perhaps we're overloading the term `tensor` a bit too far.

https://github.com/llvm/llvm-project/pull/88855
