[Mlir-commits] [mlir] [mlir][Linalg] Add syntax-highlighting in docs (PR #137646)

llvmlistbot at llvm.org
Mon Apr 28 08:00:07 PDT 2025


llvmbot wrote:



@llvm/pr-subscribers-mlir

Author: Andrzej Warzyński (banach-space)

<details>
<summary>Changes</summary>

With this small update we should gain MLIR syntax highlighting in:
* https://mlir.llvm.org/docs/Dialects/Linalg/
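
For illustration, the change only touches the fenced example blocks inside the op descriptions: they previously carried no language tag, so the docs renderer treated them as plain text. Tagged with `mlir`, a snippet such as the shorthand `linalg.map` form (copied from the diff below, with `%lhs`, `%rhs` and `%init` assumed to be defined earlier, as in the original doc example) should render with highlighting:

```mlir
// Shorthand printed form of linalg.map with an arith.addf payload,
// taken verbatim from the example in the diff below.
%add = linalg.map { arith.addf }
    ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
    outs(%init: tensor<64xf32>)
```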


---
Full diff: https://github.com/llvm/llvm-project/pull/137646.diff


1 File Affected:

- (modified) mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td (+14-14) 


``````````diff
diff --git a/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td b/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
index b9edcc92e81a9..f3dbeb274deda 100644
--- a/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
+++ b/mlir/include/mlir/Dialect/Linalg/IR/LinalgStructuredOps.td
@@ -242,7 +242,7 @@ def MapOp : LinalgStructuredBase_Op<"map", [
     on the corresponding elements.
 
     Example:
-    ```
+    ```mlir
       %add = linalg.map
           ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
           outs(%init: tensor<64xf32>)
@@ -256,7 +256,7 @@ def MapOp : LinalgStructuredBase_Op<"map", [
     non-yield operation inside the body.
 
     The example above will be printed as:
-    ```
+    ```mlir
       %add = linalg.map { arith.addf }
           ins(%lhs, %rhs : tensor<64xf32>, tensor<64xf32>)
           outs(%init: tensor<64xf32>)
@@ -327,7 +327,7 @@ def ReduceOp : LinalgStructuredBase_Op<"reduce", [
     dimensions in increasing order.
 
     Example:
-    ```
+    ```mlir
       %reduce = linalg.reduce
           ins(%input:tensor<16x32x64xf32>)
           outs(%init:tensor<16x64xf32>)
@@ -343,7 +343,7 @@ def ReduceOp : LinalgStructuredBase_Op<"reduce", [
     takes `%out` as the first argument.
 
     The example above will be printed as:
-    ```
+    ```mlir
           %reduce = linalg.reduce { arith.addf }
           ins(%input:tensor<16x32x64xf32>)
           outs(%init:tensor<16x64xf32>)
@@ -408,7 +408,7 @@ def TransposeOp : LinalgStructuredBase_Op<"transpose", [
     operation only that produces a transposed "view".
 
     Example:
-    ```
+    ```mlir
       %transpose = linalg.transpose
           ins(%input:tensor<16x64xf32>)
           outs(%init:tensor<64x16xf32>)
@@ -480,7 +480,7 @@ def BroadcastOp : LinalgStructuredBase_Op<"broadcast", [
     Broadcast the input into the given shape by adding `dimensions`.
 
     Example:
-    ```
+    ```mlir
       %bcast = linalg.broadcast
           ins(%input:tensor<16xf32>)
           outs(%init:tensor<16x64xf32>)
@@ -689,7 +689,7 @@ def MatmulOp : LinalgStructuredBase_Op<"matmul", [
     the maps if specified.
 
     Example Transpose:
-    ```
+    ```mlir
     linalg.matmul indexing_maps = [
                    affine_map<(d0, d1, d2) -> (d2, d0)>, // transpose
                    affine_map<(d0, d1, d2) -> (d2, d1)>,
@@ -700,7 +700,7 @@ def MatmulOp : LinalgStructuredBase_Op<"matmul", [
      ```
 
     Example Broadcast:
-     ```
+    ```mlir
     linalg.matmul indexing_maps = [
                    affine_map<(d0, d1, d2) -> (d2)>,     // broadcast
                    affine_map<(d0, d1, d2) -> (d2, d1)>,
@@ -711,7 +711,7 @@ def MatmulOp : LinalgStructuredBase_Op<"matmul", [
     ```
 
     Example Broadcast and transpose:
-    ```
+    ```mlir
     linalg.matmul indexing_maps = [
                       affine_map<(d0, d1, d2) -> (d2, d0)>, // transpose
                       affine_map<(d0, d1, d2) -> (d2)>,     // broadcast
@@ -839,7 +839,7 @@ def ContractOp : LinalgStructuredBase_Op<"contract", [
     `H = ⟨ b, m, n ⟩` (with `k` as a contracting reduction-dimension while `m`,
     `n` and `b` have parallel iteration-type) and gets represented as:
 
-    ```
+    ```mlir
     %D = linalg.contract
         indexing_maps = [affine_map<(batch, m, n, k) -> (batch, m, k)>,
                          affine_map<(batch, m, n, k) -> (batch, k, n)>,
@@ -854,7 +854,7 @@ def ContractOp : LinalgStructuredBase_Op<"contract", [
     For example, the following is a variant of batch-matmul with a transposition
     applied to `A` while `B`'s 2D-matrix gets broadcasted along the batch dim:
 
-    ```
+    ```mlir
     linalg.contract
         indexing_maps = [affine_map<(batch, m, n, k) -> (batch, k, m)>,
                          affine_map<(batch, m, n, k) -> (k, n)>,
@@ -953,7 +953,7 @@ def BatchMatmulOp : LinalgStructuredBase_Op<"batch_matmul", !listconcat([AttrSiz
     arguments if specified.
 
     Example Transpose:
-    ```
+    ```mlir
     linalg.batch_matmul indexing_maps = [
                    affine_map<(d0, d1, d2, d3) -> (d0, d3, d1)>, // transpose
                    affine_map<(d0, d1, d2, d3) -> (d0, d3, d2)>,
@@ -964,7 +964,7 @@ def BatchMatmulOp : LinalgStructuredBase_Op<"batch_matmul", !listconcat([AttrSiz
     ```
 
     Example Broadcast:
-    ```
+    ```mlir
     linalg.batch_matmul indexing_maps = [
                        affine_map<(d0, d1, d2, d3) -> (d3)>,         // broadcast
                        affine_map<(d0, d1, d2, d3) -> (d0, d3, d2)>,
@@ -975,7 +975,7 @@ def BatchMatmulOp : LinalgStructuredBase_Op<"batch_matmul", !listconcat([AttrSiz
     ```
 
     Example Broadcast and Transpose:
-    ```
+    ```mlir
     linalg.batch_matmul indexing_maps = [
                        affine_map<(d0, d1, d2, d3) -> (d1, d3)>,     // broadcast
                        affine_map<(d0, d1, d2, d3) -> (d0, d2, d3)>, // transpose

``````````

</details>


https://github.com/llvm/llvm-project/pull/137646

