[llvm] [RISCV] Add scheduling information for SiFive VCIX (PR #86093)

Michal Terepeta via llvm-commits llvm-commits at lists.llvm.org
Tue Mar 26 04:39:45 PDT 2024


================
@@ -307,44 +307,44 @@ multiclass VPseudoVC_X<LMULInfo m, DAGOperand RS1Class,
                        Operand OpClass = payload2> {
   let VLMul = m.value in {
     let Defs = [VCIX_STATE], Uses = [VCIX_STATE] in {
-      def "PseudoVC_" # NAME # "_SE_" # m.MX : VPseudoVC_X<OpClass, RS1Class>;
-      def "PseudoVC_V_" # NAME # "_SE_" # m.MX : VPseudoVC_V_X<OpClass, m.vrclass, RS1Class>;
+      def "PseudoVC_" # NAME # "_SE_" # m.MX : VPseudoVC_X<OpClass, RS1Class>, Sched<[!cast<SchedWrite>("WriteVC_" # NAME # "_" # m.MX)]>;
----------------
michalt wrote:

Right, that makes sense for the "vector ALU ops".

The problem with VCIX is that there is absolutely no semantic information attached to these instructions, nor is there any expectation of what different instructions will actually do. Vendors are free to implement any subset of the instructions in any way they want: they might take two instructions with similar names and have them do wildly different things. There is no expectation of any commonality across vendor implementations either. It's not unlike an interface in C++:

```cpp
// A pure interface: the names carry no meaning, so each implementation is
// free to do whatever it wants.
class Vcix {
public:
  virtual ~Vcix() = default;
  virtual void a(int) = 0;
  virtual int b(int) = 0;
  virtual void c(int, int) = 0;
  virtual int d(int, int) = 0;
};
```

Based on the interface alone, there's no way to logically group the instructions (the names are meaningless, there are no comments, etc.), so implementations of this interface can do absolutely arbitrary things and have nothing in common.

Going back to the actual VCIX instructions, I could group things according to what we do internally, but (a) that might very well change in future versions, and (b) it would be meaningless for all other implementations of the interface.

I do agree that for setting the X280 defaults in this PR, we don't actually need all these definitions. However, I hope to fork the X280 scheduling model internally and reuse the `SchedWrite`s to give more accurate latencies for our implementations.
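To make that concrete, here is a rough sketch of what I have in mind (the `MyCoreModel` / `MyCoreVCIXUnit` names are made up, and `WriteVC_V_I_M1` just assumes the `WriteVC_<op>_<LMUL>` naming from the diff above):

```tablegen
// Hypothetical vendor model forked from the X280 one; only the VCIX
// latencies would differ, everything else stays as in the upstream model.
def MyCoreModel : SchedMachineModel {
  let CompleteModel = 0;
}

let SchedModel = MyCoreModel in {
  // The unit our VCIX instructions execute on.
  def MyCoreVCIXUnit : ProcResource<1>;

  // Reuse the SchedWrite defined upstream, but give it the latency we
  // actually measured for our implementation of this instruction.
  def : WriteRes<WriteVC_V_I_M1, [MyCoreVCIXUnit]> {
    let Latency = 7;
  }
}
```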

Does that make sense?

https://github.com/llvm/llvm-project/pull/86093

