Hi Modular community,
I’ve been contributing to the stdlib using AI-assisted workflows and wanted to share some thoughts on the AI Tool Use Policy. The intent is great, but some of the rules seem optimized for a development model that’s already behind us.
I think it’s clear that all code will be AI-written. The question is how we handle human responsibility.
We’re not heading toward a world where AI assists humans; we’re heading toward one where AI writes 100% of the code and humans direct, review, and own it. The sooner we design contribution workflows for that reality, the better positioned the project will be.
I think policies that treat AI generation as an exception will need to change anyway, so we might as well start adapting now.
Draft PRs could be the right boundary for agentic workflows
We’ve moved from Copilot autocomplete to autonomous agents that can run in parallel, open branches, write tests, and iterate on feedback. The right human checkpoint in this model isn’t “did a human type the code”, it’s “did a human review and own this before it went to reviewers.”
A practical policy update would be: allow agents to autonomously create draft PRs, and require the human author to self-review the diff and manually mark the PR as ready for review. That’s a clear, verifiable, enforceable boundary that doesn’t require policing how the code was produced.
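As a rough sketch of how that boundary maps onto existing tooling (assuming the GitHub CLI; the title, body file, and label names below are hypothetical illustrations, not real stdlib conventions):

```shell
# Agent-side step: open the PR as a draft so it cannot reach a
# reviewer's queue without a human checkpoint.
# (Title, body file, and label are hypothetical examples.)
gh pr create --draft \
  --title "[stdlib] Example agent-authored change" \
  --body-file pr_description.md \
  --label "Assisted-by: AI"

# Human-side step, run only after self-reviewing the full diff:
# flips the draft to "ready for review" -- the accountability checkpoint.
gh pr ready
```

The nice property is that the checkpoint is visible in the PR timeline: GitHub records who marked the PR ready, so accountability is auditable without inspecting how the code was produced.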
Prohibiting AI-written PR descriptions seems like the wrong lever
The current policy asks contributors to write PR descriptions themselves, on the theory that it forces self-review. But this creates a weird workflow: AI writes the code, then the human copies the AI’s summary, pastes it, and lightly adapts it. That’s not more human; it’s just slower. A better option, IMO, is to invest in prompts and tooling that produce excellent PR descriptions (e.g. concise rather than exhaustively detailed), and then have the human review and refine the description before marking the PR as ready. The draft PR stage is where that review happens.
Proposal
I suggest updating the AI tool use policy so:
- Agents may create and update draft PRs.
- The human author reviews the diff and marks the PR as ready for review; that becomes the accountability checkpoint.
- The `Assisted-by: AI` label would still provide transparency.
This keeps accountability where it matters while removing unnecessary friction.
Curious whether others are running into the same friction and how you’re handling it.