[llvm] [docs] Strengthen our quality standards and connect AI contribution policy to it (PR #154441)
    via llvm-commits 
    llvm-commits at lists.llvm.org
       
    Fri Oct 24 09:22:25 PDT 2025
    
    
  
================
@@ -0,0 +1,154 @@
+# LLVM AI Tool Use Policy
+
+LLVM's policy on AI-assisted tooling is fundamentally liberal: we want to
+enable contributors to use the latest and greatest tools available. However,
+human oversight remains critical. **The contributor is always the author and is
+fully accountable for their contributions.**
+
+* **You are responsible for your contributions.** AI-generated content must be
+  treated as a suggestion, not as final code or text. It is your responsibility
+  to review, test, and understand everything you submit. Submitting unverified or
+  low-quality machine-generated content (sometimes called "[AI
+  slop][ai-slop]") creates an unfair review burden on the community and is not
+  an acceptable contribution. Contributors should review and understand their own
+  submissions before asking the community to review their code.
+
+* **Start with small contributions.** Open source communities operate on trust
+  and reputation. Reviewing large contributions is expensive, and AI tools tend
+  to generate large ones. We encourage new contributors to keep their first
+  contributions small (below 150 added lines of non-test code) until they have
+  built personal expertise and maintainer trust, and only then take on larger
+  changes.
+
+* **Be transparent about your use of AI.** When a contribution has been
+  significantly generated by an AI tool, we encourage you to note this in your
+  pull request description, commit message, or wherever authorship is normally
+  indicated for the work. For instance, use a commit message trailer like
+  `Assisted-by: <name of code assistant>`. This transparency helps the
+  community develop best practices and understand the role of these new tools.
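+
+  As a sketch (the subject line below is purely illustrative), such a trailer
+  goes on its own line at the end of the commit message:
+
+  ```
+  [ADT] Simplify iteration in StringMap (illustrative subject)
+
+  Longer description of the change goes here.
+
+  Assisted-by: <name of code assistant>
+  ```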
----------------
cmtice wrote:
Not sure if you would want to mention it, but it can also help build trust in using these tools. Maybe? ("This nice PR was assisted by such-and-so tool -- maybe it's not so bad for that tool to help with PRs...")
https://github.com/llvm/llvm-project/pull/154441