
Add an LLM policy for rust-lang/rust#1040

Open
jyn514 wants to merge 49 commits into rust-lang:master from jyn514:llm-policy

Conversation

@jyn514
Member

@jyn514 jyn514 commented Apr 17, 2026


FCP link: this comment

Summary

This document establishes a policy for how LLMs can be used when contributing to rust-lang/rust. Subtrees, submodules, and dependencies from crates.io are not in scope. Other repositories in the rust-lang organization are not in scope.

This policy is intended to live in Forge as a living document, not as a dead RFC. It will be linked from CONTRIBUTING.md in rust-lang/rust as well as from the rustc- and std-dev-guides.

Ethical issues

See this thread.

Moderation guidelines

This PR is preceded by an enormous amount of discussion on Zulip. Almost every conceivable angle has been discussed to death; there have been upwards of 3000 messages, not even counting discussion on GitHub. We initially doubted whether we could reach consensus at all.

Therefore, we ask that the scope of this PR be bounded specifically to the policy itself. In particular, we mark several topics as out of scope below. We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

So, the following are considered off topic for this PR specifically:

  • Long-term social or economic impact of LLMs
  • The environmental impact of LLMs
  • Anything to do with the copyright status of LLM output. We have gotten initial confirmation from US lawyers that LLM output likely doesn't cause issues under US law. We still need to confirm this with EU lawyers and in other parts of the world.
  • Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules. For an extended rationale, please see this comment.

Feedback guidelines

We are aware that parts of this policy will make some people very unhappy. As you are reading, we ask you to consider the following.

  • Can you think of a concrete improvement to the policy that addresses your concern? Consider:
    • Whether your change will make the policy harder to moderate
    • Whether your change will make it harder to come to a consensus
  • Does your concern need to be addressed before merging or can it be addressed in a follow-up?
    • Keep in mind the cost of not creating a policy.

If your concern is for yourself or for your team

  • What are the specific parts of your workflow that will be disrupted?
    • In particular we are only interested in workflows involving rust-lang/rust. Other repositories are not affected by this policy and are therefore not in scope.
  • Can you live with the disruption? Is it worth blocking the policy over?

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

Motivation

  • Many people find LLM-generated code and writing deeply unpleasant to read or review.
  • Many people find LLMs to be a significant aid to learning and discovery.
  • rust-lang/rust is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
    • Having a policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is not intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs. It is only intended to set out the future policy of rust-lang/rust itself.

Drawbacks

  • This bans some valid usages of LLMs. We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
  • This intentionally does not address the moral, social, and environmental impacts of LLMs. These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
  • This intentionally does not attempt to set a project-wide policy. We have attempted to come to a consensus for upwards of a month without significant progress. We are cutting our losses so we can have something rather than ad hoc moderation decisions.
  • This intentionally does not apply to subtrees of rust-lang/rust. We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

Rationale and alternatives

  • We could create a project-wide policy, rather than scoping it to rust-lang/rust. This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date. It has the disadvantage that we think it is nigh-impossible to get everyone to agree. There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
  • We could have different standards for people in the Rust project than for new contributors. That would make moderation much easier, and allow us to experiment with additional LLM use. However, it reinforces existing power structures, creates more of a gap between authors and reviewers, and feels "unfriendly" to new contributors.
  • We could have a more lenient policy that allows "responsible and appropriate" use of LLMs. This raises the question of what "responsible and appropriate" means. The usual suggestion is "self-review, and judging the change by the same standard as any other change"; but this neglects the reputational and social harm of work that "feels" LLM generated. It also makes our moderation policy much harder to understand, and increases the likelihood of re-litigating each moderation decision.
  • We could have a more strict policy that removes the threshold of originality condition. This has the advantage that our policy becomes easier to moderate and understand. It has the disadvantage that it becomes easy for people to intend to follow the policy, but be put in a position where their only choices are to either discard the PR altogether, rewrite it from scratch, or tell "white lies" about whether an LLM was involved.
  • We could have a more strict policy that bans LLMs altogether. It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.
  • We could have no policy at all. This avoids banning valid use cases; avoids implicitly legitimizing the use of LLMs by setting a policy; and saves all of us a great deal of time and effort. However, it greatly increases the baseline level of distrust within the project; makes review assignment even more of a dice roll than it is already; wastes a great deal of contributor and moderator time dealing with LLM-authored PRs on a case-by-case basis; and gives no guidance for people who really do want to use an LLM in a responsible way.

Prior art

This prior art section is taken almost entirely from Jane Lusby's summary of her research, although we have taken the liberty of moving the Rust project's prior art to the top. We thank her for her help.

Rust

  • Moderation team's spam policy
  • Compiler team's "burdensome PRs" policy

Other organizations

These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.

  • full ban
    • postmarketOS - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS - multi-point ethics-based rationale with citations included
    • zig
      • philosophical, cites Profession (novella)
      • rooted in concerns around the construction and origins of original thought
    • servo
      • more pragmatic, directly lists concerns around AI; fairly concise
    • qemu
      • pragmatic, focuses on copyright and licensing concerns
      • explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it in any way
    • forgejo
      • bans AI for review, code, documentation, and communication
      • mentions "legal uncertainties" as a motivating factor
      • explicitly excludes machine translation
  • allowed with supervision, human is ultimately responsible
    • scipy
      • strict attribution policy including name of model
    • llvm
    • blender
    • linux kernel
      • quite concise but otherwise seems the same as many in this category
    • mesa
      • framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies: the author must understand the code they contribute. Seems to leave room for partial understanding from new contributors.

        Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.

    • firefox
    • ghostty
      • pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
    • fedora
      • clearly inspired many of the above and is cited by them, but is framed more pro-AI than the derived policies tend to be
  • curl
    • does not explicitly require that humans understand contributions; otherwise the policy is similar to the above policies
  • linux foundation
    • encourages usage, focuses on legal liability, mentions that tooling exists to help automate managing legal liability, does not mention specific tools
  • In progress
    • NixOS: NixOS/nixpkgs#410741

Unresolved questions

See the "Moderation guidelines" and "Drawbacks" section for a list of topics that are out of scope.

Rendered

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Apr 17, 2026
@rustbot
Collaborator

rustbot commented Apr 17, 2026

r? @jieyouxu

rustbot has assigned @jieyouxu.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

Why was this reviewer chosen?

The reviewer was selected based on:

  • Fallback group: @Mark-Simulacrum, internal-sites
  • @Mark-Simulacrum, internal-sites expanded to Mark-Simulacrum, Urgau, ehuss, jieyouxu
  • Random selection from Mark-Simulacrum, Urgau, ehuss, jieyouxu

@jyn514
Member Author

jyn514 commented Apr 17, 2026

@rustbot label T-libs T-compiler T-rustdoc T-bootstrap

@rustbot rustbot added T-bootstrap Team: Bootstrap T-compiler Team: Compiler T-libs Team: Library / libs T-rustdoc Team: rustdoc labels Apr 17, 2026
Comment thread src/policies/llm-usage.md Outdated
## Summary
[summary]: #summary

This document establishes a policy for how LLMs can be used when contributing to `rust-lang/rust`.
Subtrees, submodules, and dependencies from crates.io are not in scope.
Other repositories in the `rust-lang` organization are not in scope.

This policy is intended to live in [Forge](https://forge.rust-lang.org/) as a living document, not as a dead RFC.
It will be linked from `CONTRIBUTING.md` in rust-lang/rust as well as from the rustc- and std-dev-guides.

## Moderation guidelines

This PR is preceded by [an enormous amount of discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/channel/588130-project-llm-policy).
Almost every conceivable angle has been discussed to death;
there have been upwards of 3000 messages, not even counting discussion on GitHub.
We initially doubted whether we could reach consensus at all.

Therefore, we ask that the scope of this PR be bounded specifically to the policy itself.
In particular, we mark several topics as out of scope below.
We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

No comment on this PR may mention the following topics:

- Long-term social or economic impact of LLMs
- The environmental impact of LLMs
- Anything to do with the copyright status of LLM output
- Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules.

## Feedback guidelines

We are aware that parts of this policy will make some people very unhappy.
As you are reading, we ask you to consider the following.

- Can you think of a *concrete* improvement to the policy that addresses your concern? Consider:
  - Whether your change will make the policy harder to moderate
  - Whether your change will make it harder to come to a consensus
- Does your concern need to be addressed before merging or can it be addressed in a follow-up?
  - Keep in mind the cost of *not* creating a policy.

### If your concern is for yourself or for your team
- What are the *specific* parts of your workflow that will be disrupted?
  - In particular we are *only* interested in workflows involving `rust-lang/rust`.
    Other repositories are not affected by this policy and are therefore not in scope.
- Can you live with the disruption? Is it worth blocking the policy over?

---

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

## Motivation
[motivation]: #motivation

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- `rust-lang/rust` is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
  - Having *a* policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is *not* intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs.
It is only intended to set out the future policy of `rust-lang/rust` itself.

## Drawbacks
[drawbacks]: #drawbacks

- This bans some valid usages of LLMs.
  We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
- This intentionally does not address the moral, social, and environmental impacts of LLMs.
  These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
- This intentionally does not attempt to set a project-wide policy.
  We have attempted to come to a consensus for upwards of a month without significant progress.
  We are cutting our losses so we can have *something* rather than ad hoc moderation decisions.
- This intentionally does not apply to subtrees of rust-lang/rust.
  We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

## Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives

- We could create a project-wide policy, rather than scoping it to `rust-lang/rust`.
  This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date.
  It has the disadvantage that we think it is nigh-impossible to get everyone to agree.
  There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
- We could have a more strict policy that removes the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html) condition.
  This has the advantage that our policy becomes easier to moderate and understand.
  It has the disadvantage that it becomes easy for people to intend to
  follow the policy, but be put in a position where their only choices
  are to either discard the PR altogether, rewrite it from scratch, or
  tell "white lies" about whether an LLM was involved.
- We could have a more strict policy that bans LLMs altogether.
  It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.

## Prior art
[prior-art]: #prior-art

This prior art section is taken almost entirely from [Jane Lusby's summary of her research](rust-lang/leadership-council#273 (comment)),
although we have taken the liberty of moving the Rust project's prior art to the top.
We thank her for her help.

### Rust
- [Moderation team's spam policy](https://github.com/rust-lang/moderation-team/blob/main/policies/spam.md/#fully-or-partially-automated-contribs)
- [Compiler team's "burdensome PRs" policy](rust-lang/compiler-team#893)
### Other organizations
These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
- full ban
  - [postmarketOS](https://docs.postmarketos.org/policies-and-processes/development/ai-policy.html)
    - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS
    - multi-point ethics-based rationale with citations included
  - [zig](https://ziglang.org/code-of-conduct/)
    - philosophical, cites [Profession (novella)](https://en.wikipedia.org/wiki/Profession_(novella))
    - rooted in concerns around the construction and origins of original thought
  - [servo](https://book.servo.org/contributing/getting-started.html#ai-contributions)
    - more pragmatic, directly lists concerns around AI; fairly concise
  - [qemu](https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-content-generators)
    - pragmatic, focuses on copyright and licensing concerns
    - explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it in any way
- allowed with supervision, human is ultimately responsible
  - [scipy](https://github.com/scipy/scipy/pull/24583/changes)
    - strict attribution policy including name of model
  - [llvm](https://llvm.org/docs/AIToolPolicy.html)
  - [blender](https://devtalk.blender.org/t/ai-contributions-policy/44202)
  - [linux kernel](https://kernel.org/doc/html/next/process/coding-assistants.html)
    - quite concise but otherwise seems the same as many in this category
  - [mesa](https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/docs/submittingpatches.rst)
    - framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies: the author must understand the code they contribute. Seems to leave room for partial understanding from new contributors.
        > Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.
  - [forgejo](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md)
    - bans AI for review, does not explicitly require contributors to understand code generated by ai.
      One could interpret the "accountability for contribution lies with contributor even if AI is used" line as implying this requirement, though their version seems poorly worded, in my opinion.
  - [firefox](https://firefox-source-docs.mozilla.org/contributing/ai-coding.html)
  - [ghostty](https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md)
    - pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
  - [fedora](https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/)
    - clearly inspired many of the above and is cited by them, but is framed more pro-AI than the derived policies tend to be
- [curl](https://curl.se/dev/contribute.html#on-ai-use-in-curl)
  - does not explicitly require that humans understand contributions; otherwise the policy is similar to the above policies
- [linux foundation](https://www.linuxfoundation.org/legal/generative-ai)
  - encourages usage, focuses on legal liability, mentions that tooling exists to help automate managing legal liability, does not mention specific tools
- In progress
  - NixOS
    - NixOS/nixpkgs#410741

## Unresolved questions
[unresolved-questions]: #unresolved-questions

See the "Moderation guidelines" and "Drawbacks" section for a list of topics that are out of scope.
Member

@jieyouxu jieyouxu left a comment


I really like this version, and thanks a ton for working on it. Specifically:

  • It doesn't try to dump entire walls of text, which is unfortunately a good way to be sure nobody reads it. Instead, it gives you concrete examples, and a guiding rule-of-thumb for uncovered scenarios, and acknowledges upfront that it surely cannot be exhaustive.
  • I also like where it points out the nuance and recognizes the uncertainties.
  • I like that it covers both "producers" and "consumers" (with nuance that reviewers can also technically use LLMs in ways that are frustrating to the PR authors!)

I left a few suggestions / nits, but even without them this is still a very good start IMO.

(Will not leave an explicit approval until we establish wider consensus, which likely will take the form of 4-team joint FCP.)


Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
@ChayimFriedman2

The links to Zulip are project-private, FWIW.

@jyn514
Member Author

jyn514 commented Apr 17, 2026

The links to Zulip are project-private, FWIW.

I'm aware. This PR is targeted towards Rust project members moreso than the broad community.

Member

@davidtwco davidtwco left a comment


I'm happy with this as an initial policy for the rust-lang/rust repository.


Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/how-to-start-contributing.md Outdated
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md Outdated
Comment thread src/policies/llm-usage.md
Comment on lines +198 to +201
### The meaning of "originally created"

This document uses the phrase "originally created" to mean "text that was generated by an LLM (and then possibly edited by a human)".
No amount of editing can change how it was originally created; how it was generated sets the initial style and it's very hard to change once it's set.
Contributor

@traviscross traviscross May 16, 2026


I notice that the policy does not specifically mention LLM-backed autocompletion anywhere. I suspect this may lead to confusion later. Editors are now willing to fill in entire functions and files (and safety comments, other documentation, etc.), but people seem to mentally bucket this in a different category.

Given the intent of the policy, I'd expect it might want to say something along the lines of:

This includes use of editor autocompletion when that feature is implemented using an LLM (check the documentation). Once you accept a nontrivial autocompletion, the work is "originally created" by an LLM and no amount of editing (other than deleting it entirely) can remove that.

I'm not sure where you'd want to put this.


Member


I had a different conclusion: "autocompletion" usually doesn't meet the bar of originality and is thus usually allowed.

If it's "nontrivial", sure. But at least in my experience, I almost never get those.



Yeah, I think it's fair to clarify that the main bar is whether any actual functional code was written, not just syntax. The example I like to use is implementing Iterator for &mut I; it's a massive wall of text, but it's not exactly intellectually thrilling work, and honestly it should probably even be autocompletable by R-A and not even require an LLM to figure that out.

"Threshold of originality" is an okay thing to defer to since it's well-understood for copyright, but ultimately however that gets worded, I think it's where we should be aiming for. Anything else isn't worth litigating.



I have added comments elsewhere suggesting we should avoid relying on copyright terminology such as "threshold of originality." The concern here is that it risks a circular conclusion by treating all LLM-generated contributions as trivial by definition.



I don't think that's the case since all of this is going to be relative to moderator arbitration anyway, and the mods know what the rule actually means. But we can discuss that particular point in the other thread you opened about it.

For here, I think that the point about "auto-completed" code, from the context of code that is mostly syntax boilerplate and thus would be exactly the same regardless of who wrote it, is reasonable to allow and not litigate regardless of the tool being used to do it.

Contributor


I had a different conclusion: "autocompletion" usually doesn't meet the bar of originality and is thus usually allowed.

OK. If that's our interpretation, I think we'd do well to write it down. It's not enough for us to know where the bar is. We want all contributors to feel secure that they know what they can do without violating our policies. Asking people who want to comply to make a quasi-legal determination over the threshold of originality, to come to a conclusion themselves about whether using LLM-backed autocomplete is allowed, is, I think, too much.

Let's make that conclusion for people. If we think LLM-backed autocompletion is OK, let's say that. If there are exceptions we can articulate plainly, let's say those too.



Also worth pointing out that the default is almost certainly going to be lenience whenever the rules are unclear, even though I agree we should be explicit about that and let people know. There's almost no situation where an honest mistake is going to be treated with anything other than just explaining what went wrong and letting someone fix it.

I don't know the best way to handle this, but a general mention of it somewhere in the policy should probably suffice.



FWIW, I've seen AI autocomplete fill out complete tests with questionable quality while trying this out in my personal projects. The models used there are typically very fast and cheap, so the quality is not great.

I do think it's helpful to explicitly clarify a stance here, even if I don't feel strongly about what the actual policy should be in this particular case.

Comment thread src/policies/llm-usage.md
Comment on lines +218 to +223
- A simple majority of members of teams using rust-lang/rust.
- No outstanding concerns from those members.

This policy can be dissolved in a few ways:

- An accepted FCP by teams using rust-lang/rust.
Contributor

@traviscross traviscross May 16, 2026


Suggested change

Original:
- A simple majority of members of teams using rust-lang/rust.
- No outstanding concerns from those members.
This policy can be dissolved in a few ways:
- An accepted FCP by teams using rust-lang/rust.

Suggested:
- A simple majority of members of teams using rust-lang/rust and that have ratified the policy.
- No outstanding concerns from those members.
This policy can be dissolved in a few ways:
- An accepted FCP by teams using rust-lang/rust and that have ratified the policy.

Something like this might help in being more clear about which teams need to be involved in these later actions.


@The-personified-devil

As a Rust user I very much appreciate this step in the right direction and just wanted to make sure that it's not just negative voices being heard, even though I don't have much to say on policy specifics.

Though I sincerely hope that those in this thread mentioning that Rust shouldn't exist if it can't be fully ethical apply this same moral rigor to most other things in their lives.

@SoNick

This comment was marked as low quality.

Comment thread src/policies/llm-usage.md
- Using machine-translation (e.g. Google Translate) from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- "Trivial" code changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html).

@xtqqczze xtqqczze May 16, 2026


I do not think "threshold of originality" is the right concept here. That standard concerns whether a work reflects the author’s own free and creative choices, which cannot meaningfully apply to LLM output. If the intent is to exclude mechanically trivial changes, the policy should describe that directly instead of relying on copyright terminology.



@clarfonthey clarfonthey May 16, 2026


I disagree that this is the case, since I've pretty much always interpreted "threshold of originality" to additionally mean that something is unique and substantial enough to be copyrightable.

But would be interested if you have any particular examples of it being used in this context, since if this is a particularly decided matter and that's just not clear, it would be helpful to know.

Side note: I thought that the Foundation's policy used the term, but no, what they actually use is this:

*Materially, in this case, refers to incidences where the AI fundamentally generated, structured, and/or substantively influenced the output, beyond just editorial.

I feel like at least one of the policy examples I sifted through did use the term, but I would be more willing to assume that the legal definition was used correctly from someone like the Foundation who almost certainly had legal counsel give it a pass before publishing. Since they didn't, you might be right, just, I don't know what is the true case here.

Edit: a quick ctrl-F for "originality" in all the policies that were mentioned turns up nothing, so, maybe this was the only place where it was mentioned.


The issue is that the proposed policy ties "trivial" changes to the "threshold of originality." If that threshold is understood as the level of originality required for copyright protection, then all LLM-generated output would automatically fall below it, not because the change is necessarily boilerplate or insignificant, but because LLM output is categorically non-copyrightable.


Given that, I think it would be better for the RFC to avoid relying on copyright terminology entirely, and instead define "trivial" directly in terms of the practical distinction it is trying to capture.


Okay, that actually does sound like a compelling argument. I would be okay using different terms for that reason.

Comment thread src/policies/llm-usage.md
Your contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally created by an LLM. See "review bots" under ⚠️ above.

### The meaning of "originally created"

@xtqqczze xtqqczze May 16, 2026


Suggested change:
− ### The meaning of "originally created"
+ ### The meaning of "originally generated"

I am still not comfortable with the use of "originally created" throughout this document, as it seems to imply that LLM output can itself be a creative work or meet a threshold of originality.

I think "originally generated" would be more precise and neutral terminology, and would avoid that implication.


@reillypascal

LLMs are non-deterministic; the issue of “hallucinations” is not solved, and it does not look like it will be solved anytime soon, if ever. If we aren't trusting humans to be perfectly vigilant for manual memory management, it doesn't make sense to expect them to catch every single machine-generated error either.

@clarfonthey

clarfonthey commented May 16, 2026

Just wanna add some additional context to the folks who are coming here because of the various places that have uncritically shared quotes from this thread without context:

This thread was meant to be mostly kept among team members to get a policy out to fix the status quo of nobody knowing what to do with LLM contributions. You might think that the default without a policy is to accept all LLM contributions, or reject all LLM contributions, but no, the answer is the third option that everyone hates: having absolutely no idea what to do with anything.

(If you think there are other problems: I agree! But this thread isn't about them.)

We all agree that objective slop is bad, but there are many things that are explicitly not slop, and there have also been several cases where contributors don't necessarily know what they're doing and we want to help. The current situation is very confusing because the popularity of LLMs, and the lack of communication about them, has amplified the difficulty of working with new contributors. While some cases are extremely easy to call out as bad, there are many cases where we have absolutely no idea what to do, and those are very difficult to deal with.

Just want to let everyone know that, hey, if you're worried about things, this is not the place to discuss it.

I've personally objected to some of the wording that people have been pointing out as bad; ultimately, it was not meant to be shared so widely outside of the project. If you got here from one of the various places doing that, you should maybe talk to the people who shared it and call them out on it.

If you don't know what the situation is, please feel free to ask about it in any number of places that are not here. Similarly, simply stating your opinions on LLMs here is entirely unhelpful and will not affect anything. I hesitate to direct everyone to alternative avenues because I think that most of the people participating were not actually interested in the boring stuff that's actually going on, and just wanted to say something to stop something they thought was happening but actually isn't.

Many team members are happy to individually discuss things outside of this thread to help clarify things to people who don't know the situation. And we would very strongly appreciate it if others who have made themselves aware of the situation would help out too. The uncritical sharing of quotes from this thread out of context resulted in some truly horrendous stuff happening yesterday and you should know that the internet seeks to find the lowest common denominator, so, maybe try and promote a critical analysis of the situation instead. Because even if you're not the kind of person who would do something awful because of a poor understanding of the situation, someone else might be, and we want to foster a community that explicitly rejects that.

Thanks. Sorry this is so vague; I feel like adding additional details would just hurt the situation. Those who know where to find out more can, but this isn't the place.

@Diggsey

Diggsey commented May 16, 2026

> Obvious. Those ethical concerns come from people that have strong ethical values, and share deep concerns about the inequality created and amplified by these "technologies" worldwide.
>
> Not talking about the elephant in the room doesn't make it disappear.
>
> I would prefer Rust to disappear than to allow non-working, non-functional code, which would plague it with thousands of bugs, to be committed into it.

There are people with ethical concerns over the use of AI and there are people with concerns about the practicality of AI use (eg. code quality, maintainer burden, etc.)

I don't think anyone wants to see the quality of Rust code decline - we can all agree on that, but a policy that satisfies people concerned about the practicality likely won't be sufficient to satisfy the people with ethical concerns.

@clarfonthey

I am particularly tired about arguments simply doubting whether one side or the other can achieve consensus on any particular flavour of policy. Even if the discussion is difficult, I trust that we have a team of adults who can figure something out. For example, I'm certainly the one arguing for the most restrictions on LLM usage and I fully support this policy for a number of reasons, particularly the ones I said most recently about how the team needs guidance on what to do with LLM contributions and any guidance, even if it allows many LLM contributions, is better than confusion.

While I think that it's very tempting to continue to argue the merit of whether this policy should include ethics or not, the policy is in FCP right now and I would rather go with what it has than try to change it. New policy can always be proposed to address shortcomings, and other policy discussions are happening concurrently anyway, including ones that do involve a discussion of ethics.

So, I think we really should drop this point and focus on just making sure the guidance is clear. While we have mostly settled on the shape of the policy, this time is for making sure that all the weird minutae are ironed out.

@astrojuanlu

astrojuanlu commented May 17, 2026

> [...] Would you rather have no Rust project than a Rust project that uses LLMs?
>
> [...] I sincerely hope that those in this thread mentioning that Rust shouldn't exist if it can't be fully ethical apply this same moral rigorousness to most other things in their lives.

As far as I understand, Rust has been thriving for more than a decade before LLMs were introduced.

Rust was already both the most loved and most wanted language in the 2022 Stack Overflow Developer Survey.

Because of that, I can only interpret the question above either as a false dichotomy or as an expression of deep fear within the Rust leadership and the community that the Rust language could disappear if a full ban were enacted.

If it is the former, I think the argument should be outright rejected. And if it's the latter, I think the community deserves a better explanation.

I understand the pressure from the moderation perspective, the urgency to have some policy in place, and the fact that this has apparently already been "discussed to death" internally. But if only crumbs of debate are going to be allowed on this GitHub thread, then at the very least people should make an effort to avoid fallacious arguments.

@ChayimFriedman2

@astrojuanlu Rust won't die from not being able to use the productivity gains of LLMs (if they exist). But Rust can definitely die from the split inside the project about LLM usage, or from maintainer burnout due to many slop PRs. These are problems that didn't exist prior to LLMs.

Comment thread src/policies/llm-usage.md
- ℹ️ This does not apply to public comments. See "review bots" under ⚠️ below.
- Writing dev-tools for your own personal use using an LLM.
- Using an LLM to generate possible solutions to an issue, learning from them, and then writing something from scratch in your own style.
- Using an LLM in the creation of experimental code changes that are not meant to be reviewed and will never be merged but must live as draft PRs on `rust-lang/rust` for tooling reasons, such as to run crater or perf.
Member

@joshtriplett joshtriplett May 17, 2026


This needs to be moved to "⚠️ Allowed with caveats", because it needs to disclose that an LLM was used. Sometimes, people take draft PRs and use them as a basis for some other work, or revive them if the original author has abandoned them. Without disclosure, that would amount to unknowingly working from LLM output.

(I agree with the premise that something amounting to "I just want to run perf on this" is...not okay, exactly, but reasonable for us to accept as a compromise. My only concern here is requiring disclosure.)

@rfcbot concern require-disclosure-even-for-draft-prs


Contributor

@traviscross traviscross May 17, 2026


I consider this beyond the scope of the teams that are FCPing this. These teams do not exclusively own the rust-lang/rust Project monorepo. They own certain code within the repository and certain activities that take place within the space, e.g., compiler reviews. Something that goes up in rust-lang/rust, for tooling reasons, that does not ask for compiler (or libs, etc.) review and that is not intended to be merged is outside of this scope. That would be a Project matter.

Concern about what someone else might later do is valid (though rather unlikely for the use cases I have in mind), but that applies equally to what someone else might later do with code posted anywhere¹ — this policy creates compliance challenges when building on other people's code.

(In fact, I'd suggest it's worth spending some words in the policy discussing this and what we expect people to do. E.g., if we expect people to proactively verify that any code they're building on (e.g., from someone else's fork) is free of LLM generation before they submit the combined result for review, then probably we'd do well to say that.)

Footnotes

  1. The obvious retort is, "yes, but it's more likely if it's in rust-lang/rust". Agreed. I'd consider that a good argument to make on a Project-level policy.

Contributor


Perhaps the compromise here is to separate these out by intention (and that was the goal behind the various qualifiers in this item — it's not speaking generally about draft PRs). If you're putting up a draft PR that you do intend to later make a not-draft PR and ask for compiler (or libs, etc.) review, then maybe it'd be better to shift the compliance to the earliest feasible point. Such PRs are distinct due to this intention.

Of course, there still may be practical problems with adding requirements on draft PRs. Often people put these up without any PR description at all. It's perhaps in tension with the idea that a draft PR is "not yet ready for review" to suggest that people could have already violated our policy at that point by the PR description being incomplete — maybe it was in draft because the author wanted to read this policy first and work out wording for a compliant disclosure.


Yeah, maybe "suggested" disclosure for them? Empty PR descriptions in drafts is a very real thing, but it would be nice to have disclosure by default for them.

Member


Whether it is "intended" to be merged is beside the point. If a PR is posted to rust-lang/rust, draft or no, people may reasonably consider building upon it. It is not an unreasonable ask to say that LLM usage needs to be disclosed, in order to give a prospective user of that code the information they'd need in order to do so.

Otherwise, a subsequent user is in the position of potentially submitting a non-draft PR containing LLM-written code, which would be against the policy. It would be a downgrade from existing practice if they first have to go "did you use an LLM for this?"; part of the point of having this policy is to not have to do that.

Member Author


i think it’s reasonable if they have to ask the original author whether they used an LLM. it’s not so different from due diligence to make sure that the license from another piece of code is compatible, and i don’t think it will generate mistrust in the same way as investigating every incoming PR.

if the author is unresponsive or we otherwise can’t determine whether it’s LLM authored, we’d assume it is and it would have the same 5 rules as other LLM PRs.

Comment thread src/policies/llm-usage.md
Comment on lines +61 to +62
In general, new contributors will be scrutinized more heavily than existing contributors,
since they haven't yet established trust with their reviewers.
Member

@joshtriplett joshtriplett May 17, 2026


Suggested change:
  In general, new contributors will be scrutinized more heavily than existing contributors,
  since they haven't yet established trust with their reviewers.
+ All contributors are subject to the same uniform policy; this is merely an observation that evaluations based on this policy incorporate an element of trust.

I think it's important to make it clear that this isn't a different set of rules for established project members.



Suggested change:
− In general, new contributors will be scrutinized more heavily than existing contributors,
− since they haven't yet established trust with their reviewers.
+ If you are a new contributor, you should expect to be scrutinized more heavily than existing contributors,
+ since you haven't yet established trust with yourr reviewers.

Member


@alice-i-cecile 👍 for that change as well (with s/yourr/your/), which makes this come across even more diplomatically. I don't think it removes the need to also add the note I added above, but I think it's a great improvement.

Comment thread src/policies/llm-usage.md
Comment on lines +208 to +211
If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
- Using an LLM for your own personal use is likely allowed ✅
- Showing LLM output to another human without solicitation is likely banned ❌
- Making a decision based on LLM output requires disclosure ⚠️
Member

@joshtriplett joshtriplett May 17, 2026


Suggested change:
− If you have a use of LLMs in mind that isn't on this list, judge it in the spirit of this overview:
+ If you have a use of LLMs in mind that isn't on this list, talk to people about it, and judge it in the spirit of this overview:
  - Using an LLM for your own personal use is likely allowed ✅
  - Showing LLM output to another human without solicitation is likely banned ❌
  - Making a decision based on LLM output requires disclosure ⚠️

I think it's worth encouraging conversation here, not one-person decisions.



Strongly agree here :) It doesn't need to be formal, but other inputs and conversations are very healthy.

Comment thread src/policies/llm-usage.md
Comment on lines +218 to +219
- A simple majority of members of teams using rust-lang/rust.
- No outstanding concerns from those members.
Member

@joshtriplett joshtriplett May 17, 2026


Suggested change:
− - A simple majority of members of teams using rust-lang/rust.
− - No outstanding concerns from those members.
+ - An accepted FCP by the teams using rust-lang/rust, with no outstanding concerns from members of those teams.

We don't typically do things by simple majority or voting, and I don't think we should start now.

(I say this despite the fact that I expect it would be easier to ratchet this policy stricter via simple majority.)


Contributor

@traviscross traviscross May 17, 2026


(I say this despite the fact that I expect it would be easier to ratchet this policy stricter via simple majority.)

If it's any consolation, I conversely expect that it will be extremely challenging to ever relax this policy due to the fact that any such change can be blocked by a single concern from any one of dozens of people.


Yeah, I agree with TC here. I'm not convinced that consensus-based decision making is the best approach for this sort of sensitive policy. It is far too easy for this to deadlock, leaving a very bad status quo or endless exhausting discussion now or in the future.

No decision still produces an outcome, and it's important to be mindful about that when choosing governance mechanisms.

Contributor


I'd like to hear from @jyn514 on what this custom mechanism is trying to achieve.

Member Author

@jyn514 jyn514 May 17, 2026


i actually want this policy to not be too hard to change. that’s the point of the experiment clause — which isn’t meant to be permanent — and the reason i lowered the requirement from N-2 to a simple majority.

i thought i’d gotten this from https://forge.rust-lang.org/governance/council.html#the-consent-decision-making-process, but apparently even the lower requirement of “consent” is still N-2.

i’m ok with lowering this to the same level as an MCP (a single second, plus no objections during the 10-day period following). i still think it’s important to treat concerns as blocking.

i would also be ok with lowering the conditions for repealing policy to the same conditions (a second + no objections), as long as there’s already a project wide policy in place.

@joshtriplett
Member

Posting at top-level because rfcbot didn't respond and I don't know if rfcbot registers concerns posted in non-top-level comments.

#1040 (comment)

@rfcbot concern require-disclosure-even-for-draft-prs

Comment thread src/policies/llm-usage.md
All uses under "⚠️ Allowed with caveats" **must** disclose that an LLM was used.

#### Experiment: LLM-created code changes
Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally created by an LLM are allowed, with disclosure.
Contributor

@traviscross traviscross May 17, 2026


Suggested change:
− Solicited, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally created by an LLM are allowed, with disclosure.
+ Agreed-upon, non-critical, high-quality, well-tested, and well-reviewed code changes that are originally created by an LLM are allowed, with disclosure.

It's defined below, so it's OK either way as a policy matter, but I think we're looking for a different word than solicited. That word suggests that the reviewer initiated the request: Alice reached out to Bob and said, "Bob, I'd like you to use an LLM to prepare and submit change X to me."

What we actually mean is that the reviewer and the author talked ahead of time and agreed upon doing this. So agreed-upon is the term that comes to mind.



Yeah, I prefer this wording too :)

@cloudskater

This comment was marked as low quality.

@cloudskater

This comment was marked as low quality.

@apiraino
Contributor

> The world is not black and white, but the truth is.

@cloudskater a message from a moderator.

I appreciate that lately you're being passionate about this topic and tried to talk a number of projects into banning AI contributions.

But this is not what is happening here. Here, we try to reconcile a wide spectrum of opinions living inside the Rust project. We are well aware that we won't make everyone happy, but we hope to make everyone only a bit unhappy while staying open to an acceptable compromise.

I kindly ask you to consider if your conduct is contributing to the conversation in a constructive way.

Thanks


Labels

- I-council-nominated: Nominated for discussion during a council meeting.
- needs-fcp: This change is insta-stable, so needs a completed FCP to proceed.
- S-waiting-on-review: Status: Awaiting review from the assignee but also interested parties.
- T-bootstrap: Team: Bootstrap
- T-compiler: Team: Compiler
- T-libs: Team: Library / libs
- T-rustdoc: Team: rustdoc
- T-types: Team: Types
