[llvm] [docs] Strengthen our quality standards and connect AI contribution policy to it (PR #154441)

Jonas Devlieghere via llvm-commits llvm-commits at lists.llvm.org
Wed Oct 1 16:28:35 PDT 2025


================
@@ -0,0 +1,154 @@
+# LLVM AI Tool Use Policy
+
+LLVM's policy on AI-assisted tooling is fundamentally liberal -- we want to
+enable contributors to use the latest and greatest tools available. However,
+human oversight remains critical. **The contributor is always the author and is
+fully accountable for their contributions.**
+
+* **You are responsible for your contributions.** AI-generated content must be
+  treated as a suggestion, not as final code or text. It is your responsibility
+  to review, test, and understand everything you submit. Submitting unverified or
+  low-quality machine-generated content (sometimes called "[AI
+  slop][ai-slop]") creates an unfair review burden on the community and is not
+  an acceptable contribution. Contributors should review and understand their own
+  submissions before asking the community to review their code.
+
+* **Start with small contributions:** Open source communities operate on trust
+  and reputation. Reviewing large contributions is expensive, and AI tools tend
+  to generate large contributions. We encourage new contributors to keep their
+  first contributions small, specifically below 150 added lines of non-test
+  code, until they have built personal expertise and maintainer trust to take
+  on larger changes (see the size-check sketch after this list).
+
+* **Be transparent about your use of AI.** When a contribution has been
+  significantly generated by an AI tool, we encourage you to note this in your
+  pull request description, commit message, or wherever authorship is normally
+  indicated for the work. For instance, use a commit message trailer like
+  `Assisted-by: <name of code assistant>` (see the example after this list).
+  This transparency helps the community develop best practices and understand
+  the role of these new tools.
+
+* **LLVM values your voice.** Clear, concise, and authentic communication is
+  our goal. Using AI tools to translate your thoughts or overcome language
+  barriers is a welcome and encouraged practice, but keep in mind that we
+  value your unique voice and perspective.
+
+* **Limit AI tools for reviewing.** As with creating code, documentation, and
+  other contributions, reviewers may use AI tools to assist in providing
+  feedback, but not to wholly automate the review process. In particular, AI
+  should not make the final determination on whether a contribution is
+  accepted. The same principle of ownership applies to review comment
+  contributions as it does to code contributions.
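+
+As a minimal, illustrative sketch (not an official definition of "non-test
+code"), you can estimate the size of a branch locally with something like the
+following, assuming your work sits on a branch created from `main` and
+adjusting the exclude patterns for the subprojects you touch:
+
+```sh
+# Summarize added/removed lines on this branch relative to main,
+# skipping common test directories via git's ':(exclude)' pathspecs.
+# The exclude patterns are illustrative only.
+git diff --shortstat main...HEAD -- . \
+  ':(exclude)*/test/*' ':(exclude)*/unittests/*'
+```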
+
+[ai-slop]: https://simonwillison.net/2024/May/8/slop/
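+
+As a concrete illustration of the trailer format suggested above, a commit
+message might end like this (the summary and body text are placeholders):
+
+```text
+[clang] One-line summary of the change
+
+Longer description of what changed and why, written and reviewed by the
+human author.
+
+Assisted-by: <name of code assistant>
+```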
+
+This policy extends beyond code contributions and includes, but is not limited
+to, the following kinds of contributions:
+
+- Code, usually in the form of a pull request
+- RFCs or design proposals
+- Issues or security vulnerabilities
+- Comments and feedback on pull requests
+
+
+## Extractive Changes
+
+Sending patches, PRs, RFCs, comments, etc. to LLVM is not free -- it takes a
+lot of maintainer time and energy to review those contributions! We see the act
+of sending low-quality, un-self-reviewed contributions to the LLVM project as
+"extractive." It is an attempt to extract work from the LLVM project community
+in the form of review comments and mentorship, without the contributor putting
+in commensurate effort to make their contribution worth reviewing.
+
+Our **golden rule** is that a contribution should be worth more to the project
+than the time it takes to review it. This idea is captured in the following
+quote from the book [Working in Public][public] by Nadia Eghbal:
+
+[public]: https://press.stripe.com/working-in-public
+
+> \"When attention is being appropriated, producers need to weigh the costs and
+> benefits of the transaction. To assess whether the appropriation of attention
+> is net-positive, it's useful to distinguish between *extractive* and
+> *non-extractive* contributions. Extractive contributions are those where the
+> marginal cost of reviewing and merging that contribution is greater than the
+> marginal benefit to the project's producers. In the case of a code
+> contribution, it might be a pull request that's too complex or unwieldy to
+> review, given the potential upside." -- Nadia Eghbal
+
+We encourage contributions that help sustain the project. We want the LLVM
+project to be welcoming and open to aspiring compiler engineers who are willing
+to invest time and effort to learn and grow, because growing our contributor
+base and recruiting new maintainers helps sustain the project over the long
+term. We therefore automatically post a greeting comment to pull requests from
+new contributors and encourage maintainers to spend time helping new
+contributors learn.
+
+## Handling Violations
+
+If a maintainer judges that a contribution is *extractive* (i.e., it was
+generated with tool assistance or simply requires significant revision), they
+should copy-paste the following response, add the `extractive` label if
----------------
JDevlieghere wrote:

I realize this isn't meant to be exhaustive, but should we also encourage reviewers to "request changes"? The motivation being that it (1) clears the PR from my review queue and (2) sends a clear message to other reviewers not to bother with the patch. I can filter based on the `extractive` label, but I think the "changes requested" status is more obvious.

https://github.com/llvm/llvm-project/pull/154441

