[Mlir-commits] [mlir] [mlir] Target Description and Cost Model in MLIR (PR #85141)

Mehdi Amini llvmlistbot at llvm.org
Wed Mar 20 14:21:55 PDT 2024


================
@@ -240,6 +241,9 @@ class MLIRContext {
   /// (attributes, operations, types, etc.).
   llvm::hash_code getRegistryHash();
 
+  /// Get context-specific system description
+  SystemDesc &getSystemDesc();
----------------
joker-eph wrote:

> where does that target id live - op/func/region/module - and how does that affect passes?

Isn't this something you have to solve anyway? We've been through this with DLTI already, I'm not sure about a better alternative, but it has to be in the IR somehow IMO (when you have a gpu module nested in a host module, they have two different targets, and two gpu modules with two different targets can be present as well).
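To make the nested-targets point concrete, here is a hedged sketch (module, function, and chip names are illustrative, not from the PR) of how the GPU dialect already carries two different targets inside one host module today:

```mlir
// A host module containing two GPU modules, each annotated with a
// different target attribute. Passes pick up the target from the
// enclosing gpu.module rather than from any global state.
module attributes {gpu.container_module} {
  // Hypothetical kernel module compiled for an NVIDIA sm_80 chip.
  gpu.module @kernels_nv [#nvvm.target<chip = "sm_80">] {
    gpu.func @vecadd() kernel {
      gpu.return
    }
  }
  // Hypothetical kernel module compiled for an AMD gfx90a chip.
  gpu.module @kernels_amd [#rocdl.target<chip = "gfx90a">] {
    gpu.func @vecadd() kernel {
      gpu.return
    }
  }
}
```

Any pass walking this IR finds the target by looking at the attributes of the surrounding `gpu.module`, which is why the target information "has to be in the IR somehow".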

> who populates the target id as attributes? How long do they live? where does the actual map lives?

I don't quite follow the question, mostly because DLTI and the GPU target interfaces are design points that already have answers for this. Meaning: if you have an interface, anyone can provide it, and the actual instance lives in the context (as every attribute does) in some fashion (not as a new first-class concept, but using all the existing storage we already have).
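As a hedged illustration of that pattern (the specific entries below are made up), DLTI already attaches target/layout information as an ordinary attribute; the attribute instance is uniqued in the `MLIRContext` like any other attribute, and consumers query it through the `DataLayout` interface rather than through new first-class context state:

```mlir
// Data-layout information carried as a plain discardable attribute on
// the module. No MLIRContext modification is needed: the attribute is
// stored via the context's existing attribute uniquing.
module attributes {dlti.dl_spec = #dlti.dl_spec<
    // Illustrative entries: index width and i64 ABI/preferred alignment.
    #dlti.dl_entry<index, 32 : i32>,
    #dlti.dl_entry<i64, dense<[64, 64]> : vector<2xi64>>>} {
}
```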

> do we want the maps to be a global? context? passed through as args?

I don't see how a global makes sense: we've kept MLIR free of global state as much as possible, and I'm skeptical we need to introduce some here. The Context is a possibility for the storage, but as mentioned above it is already extensible: I don't expect any modification to the MLIRContext at the moment.
"Passed as args" isn't an infra question: it becomes a tooling question. That is, "at the edge" you can have something that translates some args into the initialized state (this is how we handle a lot of the options), but the infra itself never actually **needs** any cl::opt parsing to be set up.

> This problem is so large that it's virtually impossible to have a PR with a running example that covers all cases. 

This is not what I'm asking. I'm asking for a doc (not code), and not something that covers all cases, but just one single case that we can play with "on paper". Start simple: say we have a simple TOSA model you want to codegen for a particular modern CPU. We can make up some conceptual stages of the lowering and look at some of the interactions between these lowerings and a hypothetical target interface.
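For the sake of the "on paper" exercise, a hedged example of the kind of simple TOSA model such a walkthrough could start from (shapes and names are made up):

```mlir
// A minimal batched matmul, the sort of op whose tiling/vectorization
// choices would consult a hypothetical CPU target interface
// (cache sizes, vector width, etc.) during lowering.
func.func @model(%a: tensor<1x4x8xf32>, %b: tensor<1x8x16xf32>)
    -> tensor<1x4x16xf32> {
  %0 = tosa.matmul %a, %b
      : (tensor<1x4x8xf32>, tensor<1x8x16xf32>) -> tensor<1x4x16xf32>
  return %0 : tensor<1x4x16xf32>
}
```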

https://github.com/llvm/llvm-project/pull/85141
