<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Fantastic, thank you!<div class=""><br class=""></div><div class="">-Chris<br class=""><div><br class=""><blockquote type="cite" class=""><div class="">On Oct 7, 2019, at 1:17 AM, Tanya Lattner via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html; charset=utf-8" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">On behalf of the LLVM Foundation board of directors, we accept MLIR as a project into LLVM. This decision is based on the community's responses, which show clear support and approval. We will provide services and support on our side.<div class=""><br class=""></div><div class="">Welcome MLIR!<br class=""><div class=""><br class=""></div><div class="">Thanks,</div><div class="">Tanya Lattner</div><div class="">President, LLVM Foundation</div><div class=""><div class=""><br class=""><blockquote type="cite" class=""><div class="">On Sep 9, 2019, at 8:30 AM, Chris Lattner via llvm-dev <<a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><meta http-equiv="Content-Type" content="text/html; charset=utf-8" class=""><div style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class="">Hi all,<br class=""><br class="">The TensorFlow team at Google has been leading the charge to build a new set of compiler infrastructure, known as the <a href="https://github.com/tensorflow/mlir" class="">MLIR project</a>. The initial focus has been on machine learning infrastructure, high-performance accelerators, heterogeneous compute, and HPC-style computations. 
That said, the implementation and design of this infrastructure is state of the art, is not specific to these applications, and is already being adopted, e.g., by <a href="https://llvm.org/devmtg/2019-10/talk-abstracts.html#tech19" class="">the Flang compiler</a>. If you are interested in learning more about MLIR and the technical design, I’d encourage you to look at the MLIR Keynote and Tutorial at the last <a href="http://llvm.org/devmtg/2019-04/" class="">LLVM Developer Meeting</a>.<br class=""><br class="">MLIR is already <a href="https://medium.com/tensorflow/mlir-a-new-intermediate-representation-and-compiler-framework-beba999ed18d" class="">open source on GitHub</a>, and includes a significant amount of code in two repositories. “MLIR Core” is located in <a href="https://github.com/tensorflow/mlir" class="">github/tensorflow/mlir</a>, including an application-independent IR, the code generation infrastructure, common graph transformation infrastructure, declarative operation definition and rewrite infrastructure, polyhedral transformations, etc. The primary TensorFlow repository at <a href="https://github.com/tensorflow/tensorflow/" class="">github/tensorflow/tensorflow</a> contains TensorFlow-specific functionality built using MLIR Core infrastructure.<br class=""><br class="">In discussions with a <a href="https://blog.google/technology/ai/mlir-accelerating-ai-open-source-infrastructure/" class="">large number of industry partners</a>, we’ve achieved consensus that it would be best to build a shared ML compiler infrastructure under a common umbrella with well-known, neutral governance. As such, we’d like to propose that MLIR Core join the non-profit LLVM Foundation as a new subproject! 
We plan to follow the <a href="http://llvm.org/docs/DeveloperPolicy.html" class="">LLVM Developer Policy</a>, and have been following an LLVM-style development process from the beginning, including all relevant coding and testing styles, and we build on core LLVM infrastructure pervasively.<br class=""><br class="">We think that MLIR is a nice complement to existing LLVM functionality, providing common infrastructure for higher-level optimization and transformation problems, and it dovetails naturally with LLVM IR optimizations and code generation. Please let us know if you have any thoughts, questions, or concerns!<br class=""><br class="">-Chris<br class=""><br class=""></div>_______________________________________________<br class="">LLVM Developers mailing list<br class=""><a href="mailto:llvm-dev@lists.llvm.org" class="">llvm-dev@lists.llvm.org</a><br class=""><a href="https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev" class="">https://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev</a><br class=""></div></blockquote></div><br class=""></div></div></div></div></blockquote></div><br class=""></div></body></html>