[Mlir-commits] [mlir] Add a tutorial on mlir-opt (PR #96105)
Mehdi Amini
llvmlistbot at llvm.org
Wed Jun 19 14:18:45 PDT 2024
================
@@ -0,0 +1,432 @@
+# Using `mlir-opt`
+
+`mlir-opt` is the command-line entry point for running passes and lowerings on MLIR code.
+This tutorial explains how to use `mlir-opt` to run passes, and covers
+some details of MLIR's built-in dialects along the way.
+
+Prerequisites:
+
+- [Building MLIR from source](/getting_started/)
+
+[TOC]
+
+## Overview
+
+We start with a brief summary of context that helps to frame
+the uses of `mlir-opt` detailed in this article.
+For a deeper dive on motivation and design,
+see [the MLIR paper](https://arxiv.org/abs/2002.11054).
+
+Two of the central concepts in MLIR are *dialects* and *lowerings*.
+In traditional compilers, there is typically one "dialect,"
+called an *intermediate representation*, or IR,
+that is the textual or data-structural description of a program
+within the scope of the compiler's execution.
+For example, in GCC the IR is called GIMPLE,
+and in LLVM it's called LLVM-IR.
+Compilers typically convert an input program to their IR,
+run optimization passes,
+and then convert the optimized IR to machine code.
+
+MLIR's philosophy is to split the job into smaller steps.
+First, MLIR allows one to define many IRs called *dialects*,
+some considered "high level" and some "low level,"
+each with its own set of types, operations, metadata,
+and semantics that define what the operations do.
+Different dialects may coexist in the same program.
+Then, one writes a set of *lowering passes*
+that incrementally convert different parts of the program
+from higher level dialects to lower and lower dialects
+until reaching machine code
+(or, in many cases, LLVM IR, which finishes the job).
+Along the way,
+*optimizing passes* are run to make the code more efficient.
+The main point here is that the high level dialects exist
+*so that* these important optimizing passes are easy to write.
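+
+For example, here is a minimal (illustrative) function mixing operations
+from the `func` and `arith` dialects in one module; each operation name
+is prefixed by the dialect that defines it:
+
+```mlir
+func.func @add_one(%x: i32) -> i32 {
+  // `arith` provides the arithmetic operations; `func` provides the
+  // function and return operations that wrap them.
+  %c1 = arith.constant 1 : i32
+  %sum = arith.addi %x, %c1 : i32
+  func.return %sum : i32
+}
+```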
+
+A central motivation for building MLIR
+was to build the `affine` dialect,
+which is designed to enable [polyhedral optimizations](https://polyhedral.info/)
+for loop transformations.
+Compiler engineers had previously implemented polyhedral optimizations
+in LLVM and GCC (without an `affine` dialect),
+and it was difficult because they had to reconstruct well-structured loop nests
+from a much more complicated set of low-level operations.
+Having a higher level `affine` dialect preserves the loop nest structure
+at an abstraction layer where optimizations are easier to write;
+the structure is then discarded during lowering.
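+
+As a small illustrative sketch (the function name and shapes here are
+hypothetical), a loop nest in the `affine` dialect looks like this:
+
+```mlir
+func.func @scale(%buf: memref<8x8xf32>, %c: f32) {
+  // The loop nest structure is explicit in the IR, rather than being
+  // reconstructed from branches and index arithmetic.
+  affine.for %i = 0 to 8 {
+    affine.for %j = 0 to 8 {
+      %v = affine.load %buf[%i, %j] : memref<8x8xf32>
+      %s = arith.mulf %v, %c : f32
+      affine.store %s, %buf[%i, %j] : memref<8x8xf32>
+    }
+  }
+  func.return
+}
+```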
----------------
joker-eph wrote:
I'm not convinced that the two previous paragraphs are desirable here. We can keep it simple, I believe:
```
Compilers operate on a representation of the program, commonly
referred to as "IR" (Intermediate Representation). To keep the compiler
testable and understandable, transformations are generally organized as
separate "passes". A pass takes some IR as input and mutates it to
produce new IR.
In MLIR, the IR exists in three equivalent forms: in-memory, textual,
and bytecode. The in-memory form is the data structure that passes
operate on, the textual form is used for debugging and testing, and
the bytecode form is used for general serialization. MLIR can convert
any of these forms to another.
```
Even this seems like it belongs in a general tutorial more than an mlir-opt-specific one.
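As a concrete sketch of that last conversion point (assuming a built `mlir-opt`; the file names here are made up), round-tripping between the textual and bytecode forms looks like:

```
# Textual form -> bytecode.
mlir-opt input.mlir --emit-bytecode -o input.mlirbc
# Bytecode -> textual form (the parser detects bytecode input).
mlir-opt input.mlirbc -o roundtrip.mlir
```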
https://github.com/llvm/llvm-project/pull/96105