[llvm] [AArch64] Initial sched model for Neoverse N3 (PR #106371)

David Green via llvm-commits llvm-commits at lists.llvm.org
Fri Aug 30 10:09:00 PDT 2024


================
@@ -0,0 +1,2359 @@
+//=- AArch64SchedNeoverseN3.td - NeoverseN3 Scheduling Defs --*- tablegen -*-=//
+//
+// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
+// See https://llvm.org/LICENSE.txt for license information.
+// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
+//
+//===----------------------------------------------------------------------===//
+//
+// This file defines the scheduling model for the Arm Neoverse N3 processors.
+//
+//===----------------------------------------------------------------------===//
+
+def NeoverseN3Model : SchedMachineModel {
+    let IssueWidth            =  10; // Micro-ops dispatched at a time.
+    let MicroOpBufferSize     = 160; // Entries in micro-op re-order buffer. NOTE: Copied from N2.
+    let LoadLatency           =   4; // Optimistic load latency.
+    let MispredictPenalty     =  10; // Extra cycles for mispredicted branch. NOTE: Copied from N2.
+    let LoopMicroOpBufferSize =  16; // NOTE: Copied from Cortex-A57.
+    let CompleteModel         =   1;
+
+    list<Predicate> UnsupportedFeatures = !listconcat(SMEUnsupported.F,
+        [HasSVE2p1, HasPAuthLR, HasCPA, HasCSSC]);
+}
+
+//===----------------------------------------------------------------------===//
+// Define each kind of processor resource and number available on Neoverse N3.
+// Instructions are first fetched and then decoded into internal Macro-OPerations
+// (MOPs). From there, the MOPs proceed through register renaming and dispatch stages.
+// A MOP can be split into two Micro-OPerations (µOPs) further down the pipeline
+// after the decode stage. Once dispatched, µOPs wait for their operands and issue
+// out-of-order to one of thirteen issue pipelines. Each issue pipeline can accept
+// one µOP per cycle.
+
+let SchedModel = NeoverseN3Model in {
+
+// Define the (13) issue ports.
+def N3UnitB   : ProcResource<2>;  // Branch 0/1
+def N3UnitS   : ProcResource<2>;  // Integer Single-Cycle 0/1
+def N3UnitM0  : ProcResource<1>;  // Integer Single/Multi-Cycle 0
+def N3UnitM1  : ProcResource<1>;  // Integer Single/Multi-Cycle 1
+def N3UnitV0  : ProcResource<1>;  // FP/ASIMD 0
+def N3UnitV1  : ProcResource<1>;  // FP/ASIMD 1
+def N3UnitD   : ProcResource<2>;  // Integer Store data 0/1
+def N3UnitL01 : ProcResource<2>;  // Load/Store 0/1
+def N3UnitL2  : ProcResource<1>;  // Load 2
+
+def N3UnitI : ProcResGroup<[N3UnitS, N3UnitM0, N3UnitM1]>;
+def N3UnitM : ProcResGroup<[N3UnitM0, N3UnitM1]>;
+def N3UnitL : ProcResGroup<[N3UnitL01, N3UnitL2]>;
+def N3UnitV : ProcResGroup<[N3UnitV0, N3UnitV1]>;
+
+//===----------------------------------------------------------------------===//
+
+def : ReadAdvance<ReadI,       0>;
+def : ReadAdvance<ReadISReg,   0>;
+def : ReadAdvance<ReadIEReg,   0>;
+def : ReadAdvance<ReadIM,      0>;
+def : ReadAdvance<ReadIMA,     1, [WriteIM32, WriteIM64]>;
+def : ReadAdvance<ReadID,      0>;
+def : ReadAdvance<ReadExtrHi,  0>;
+def : ReadAdvance<ReadAdrBase, 0>;
+def : ReadAdvance<ReadST,      0>;
+def : ReadAdvance<ReadVLD,     0>;
+
+def : WriteRes<WriteAtomic,  []> { let Unsupported = 1; }
+def : WriteRes<WriteFDiv,    []> { let Unsupported = 1; }
----------------
davemgreen wrote:

I think I would copy what other models do. If new instructions are added that use these Writes (which, granted, might be unlikely for some), then the scheduling info would get a sensible default for the new instructions, even if the model doesn't know about them. (Although I can see your point about them being unused.)
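
As a rough sketch of the shape that could take (the unit mappings and latencies below are placeholders for illustration, not numbers from the N3 software optimization guide):

// Illustrative defaults only: mapping the shared SchedWrites to real units
// means any future instruction that reuses them still gets plausible
// scheduling info instead of being left unsupported.
def : WriteRes<WriteAtomic, [N3UnitM]>  { let Latency = 4;  }
def : WriteRes<WriteFDiv,   [N3UnitV0]> { let Latency = 13; }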

https://github.com/llvm/llvm-project/pull/106371
