[Mlir-commits] [mlir] [mlir] Initial patch to add an MPI dialect (PR #68892)

Anton Lydike llvmlistbot at llvm.org
Wed Jan 31 02:36:20 PST 2024


AntonLydike wrote:

As @kiranchandramohan said:
> Could you summarise the answers to the questions in the guidelines on contributing a new dialect in the RFC? Apologies, if this has been done already, then a link will be useful.
> 
> https://mlir.llvm.org/getting_started/DeveloperGuide/#guidelines-on-contributing-a-new-dialect-or-important-components

Here are our answers as requested:

> What is the overall goal of the dialect?

An MPI dialect will provide first-class message-passing primitives that can be used both as a target by higher-level code generation and as a fixed point for other lower-level efforts such as MPI-level optimizations and rewrites.

> What is the first implementation milestone?

As a first implementation milestone, we will introduce ops for blocking communication and a lowering to the LLVM dialect.
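To make the milestone concrete, a blocking send in the proposed dialect could look roughly like the following. This is an illustrative sketch only; the op names, assembly syntax, and the `!mpi.retval` type are assumptions subject to review in this PR:

```mlir
// Hypothetical use of the proposed MPI dialect (syntax illustrative).
func.func @send_to_root(%buf : memref<100xf32>) {
  // Initialize the MPI runtime; errors are surfaced as !mpi.retval values.
  %err0 = mpi.init : !mpi.retval

  // Query this process's rank in the default communicator.
  %err1, %rank = mpi.comm_rank : !mpi.retval, i32

  // Blocking send of the whole buffer to rank 0 with tag 0.
  %tag = arith.constant 0 : i32
  %dest = arith.constant 0 : i32
  mpi.send(%buf, %tag, %dest) : memref<100xf32>, i32, i32

  %err2 = mpi.finalize : !mpi.retval
  return
}
```

Note how the ops take a `memref` directly rather than a raw pointer, count, and `MPI_Datatype` triple; the lowering to the LLVM dialect would be responsible for extracting the underlying pointer and element count.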

> How does it fit into the MLIR dialect ecosystem?

It fits in as a targetable dialect that abstracts away the messy MPI ABI and helps bridge the gap between raw pointers and MLIR's memref type.

> Connection: how does it connect to the existing dialects in a compilation pipeline(s)?

The MPI dialect will work as a target for higher-level message-passing abstractions such as the sharding dialect. The lower end of a sharded linalg flow would look like (with more intermediate steps) (linalg + sharding) -> (scf + memref + MPI) -> (llvm). We have demonstrated this approach by building a stencil compiler using a slightly different higher-level message-passing abstraction that goes (stencil) -> (stencil + message passing) -> (scf + memref + MPI) -> (llvm).
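For illustration, the middle (scf + memref + MPI) stage of such a flow might contain a halo exchange between neighboring ranks along the lines of the sketch below. This is a hypothetical example; the op spellings and the `@halo_exchange` function are illustrative, not part of the patch:

```mlir
// Hypothetical halo exchange at the (scf + memref + MPI) level
// (op syntax illustrative). Each rank sends its boundary slice to a
// neighbor and receives the neighbor's boundary slice in return.
func.func @halo_exchange(%halo_out : memref<64xf64>,
                         %halo_in  : memref<64xf64>,
                         %neighbor : i32) {
  %tag = arith.constant 1 : i32
  // Blocking point-to-point exchange with the neighboring rank.
  mpi.send(%halo_out, %tag, %neighbor) : memref<64xf64>, i32, i32
  mpi.recv(%halo_in, %tag, %neighbor) : memref<64xf64>, i32, i32
  return
}
```

A higher-level dialect would generate calls like this around its compute loops, while the lowering to the LLVM dialect maps them onto the C MPI ABI.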

> Consolidation: is there already a dialect with a similar goal or matching abstractions; if so, can it be improved instead of adding a new one?

We are unaware of any other dialect in MLIR filling a similar role to this one.

> Reuse: how does it generalize to similar but slightly different use cases?

While we focus on a target dialect for HPC code generation, other groups are interested in exploring a different space, such as MPI-level optimizations or translating point-to-point MPI calls to other forms of communication. We are confident that our abstractions can generalize to these use cases without significant effort. We have collaborated with Jeff Hammond from the MPI ABI standardization body to ensure that our abstractions are a sensible expression of the MPI interface.

> What is the community of users that it is serving?

The proposed MPI dialect makes MLIR accessible to the HPC community and gives MLIR's AI compiler community access to HPC-style distributed computing. MPI is a broadly established communication standard and the de facto communication abstraction in HPC.

> Who are the future contributors/maintainers beyond those who propose the dialect?

HPC DSL developers such as Frank Schlimbach at Intel are actively looking into MLIR, and Renato Golin at Intel agreed to co-maintain the dialect with us. Unlocking access to the MLIR ecosystem through the MPI dialect means they can contribute to this dialect as part of their work, ensuring that the dialect receives the support it needs.


https://github.com/llvm/llvm-project/pull/68892

