[llvm] [AMDGPU] Add ML-oriented coexec scheduler selection and queue handling (PR #169616)

Austin Kerbow via llvm-commits llvm-commits at lists.llvm.org
Wed Mar 4 10:57:43 PST 2026


================
@@ -574,6 +575,58 @@ static cl::opt<std::string>
                         cl::desc("Select custom AMDGPU scheduling strategy."),
                         cl::Hidden, cl::init(""));
 
+enum class AMDGPUPostSchedStrategy {
+  Default,
+  Nop,
+};
+
+static StringRef getAMDGPUWorkloadType(const Module *M) {
+  if (!M)
+    return "";
+
+  auto *WorkloadType =
+      dyn_cast_or_null<MDString>(M->getModuleFlag("amdgpu-workload-type"));
+  if (!WorkloadType)
+    return "";
+
+  return WorkloadType->getString();
+}
----------------
kerbowa wrote:

I'm fine with this proposal, but if this ends up being the form the control takes, I'd personally lean toward implementing a function like this in Triton, essentially offloading the decision to the frontend.

As a side note, I could also imagine ML- or Triton-specific controls extending beyond the scheduler at some point, as they have in the past.
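For context, the control under discussion is an LLVM module flag: a frontend such as Triton would emit an entry like `!{i32 1, !"amdgpu-workload-type", !"ml"}` under `!llvm.module.flags` (the value `"ml"` here is illustrative, not taken from the patch). A minimal Python sketch of the lookup-and-select logic the quoted helper feeds, with a hypothetical workload-to-strategy mapping:

```python
# Hypothetical sketch of the selection implied by the patch: map the
# module's "amdgpu-workload-type" flag to a post-RA scheduler strategy.
# The mapping below ("ml" -> NOP) is an assumption for illustration.
from enum import Enum


class PostSchedStrategy(Enum):
    DEFAULT = "default"
    NOP = "nop"


def get_workload_type(module_flags: dict) -> str:
    # Mirrors getAMDGPUWorkloadType: missing module or flag yields "".
    if module_flags is None:
        return ""
    return module_flags.get("amdgpu-workload-type", "")


def select_post_sched_strategy(module_flags: dict) -> PostSchedStrategy:
    if get_workload_type(module_flags) == "ml":
        return PostSchedStrategy.NOP
    return PostSchedStrategy.DEFAULT
```

Moving this into Triton, as suggested above, would mean the frontend picks the strategy and encodes it directly, rather than the backend inferring it from a workload-type string.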

https://github.com/llvm/llvm-project/pull/169616


More information about the llvm-commits mailing list