I have recently started two discussions about adding an AI code-contribution policy to projects, namely Tangled and Leaflet, with the goal of convincing them to prohibit the use of tools such as Cursor, GitHub Copilot, or Zed's LLM code auto-completion features.

In doing so, I've been met with many of the same arguments over and over again, stating concerns like:

If you prohibit AI Contributions, who gets to define what that means?
If you prohibit AI Contributions, people will try doing it anyways!
You can't prohibit AI contributions, you don't know their value!

And to all of them I say: Thank you, but no thanks.

We define what is and isn't "AI-assisted". We don't need there to be consensus. We stand up for when we accidentally merge AI-assisted code. And we recognize that AI-generated code does more harm than good for us.


Background

I work on wafrn in my free time, am a maintainer and advisor on the project, and moderate the main app.wafrn.net instance. A while ago, we adopted a strict no-AI policy regarding contributions to the project.

We adopted it because we have concerns around licensing, around code quality, and about our own ability to catch structural issues that aren't immediately apparent but are often emitted by "coding agents" and similar tools. And, of course, we value the human craft of creating software.

Since then, we have had:

  • Zero AI-Generated Pull Requests

  • Zero AI-Generated code making it into Wafrn

  • One user potentially generating issues with the help of an LLM. (see here)

When we evaluate a PR and we feel that something is off, we ask contributors to confirm to us whether or not they've used AI in any part of the creation process, whether that be assets, code, documentation, the pull-request description, code-analysis, review, etc.

If they confirm that they did, we reject the PR.

If they confirm that they did not, we discuss whether we still want to accept the contribution, based on our own judgement of whether it appears to be AI-generated.

And if we catch someone generating code with AI and slipping it past our code-review process, we rewrite the relevant code and exclude them from contributing.
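The policy above can be condensed into a short section for a project's contribution guidelines. The wording below is an illustrative sketch based on our process, not Wafrn's actual text:

```
## AI-Assisted Contributions

We do not accept contributions created with the help of AI tools
(e.g. Cursor, GitHub Copilot, or LLM-based auto-completion). This
applies to every part of a contribution: code, assets, documentation,
pull-request descriptions, code analysis, and review.

- If we suspect AI involvement, we will ask you to confirm whether
  you used AI at any stage of the contribution.
- If you confirm AI use, the pull request will be rejected.
- If you deny AI use, we will decide based on our own judgement
  whether to accept the contribution.
- If AI-generated code is discovered after merging, we will rewrite
  the affected code and exclude the contributor from the project.
```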

This is how you deal with AI-generated code in software projects.

It's not about having a 100% accurate detection rate; it's about trust between contributors and maintainers, and we don't want to appeal to, or build trust with, those who use these tools. We want to build trust with those who reject these tools, as we do, and so we work on a best-effort basis to reject AI-generated media, text, and code.


Of course, this is anecdotal evidence, but if you want more you can look at https://elementary.io. The elementaryOS project has also adopted a strict no-AI policy and has been fine: no massive amounts of fake contributions or anything of that nature.

Another example, a largely different one, is cURL. cURL recently shut down its bug bounty program because too many inaccurate AI-generated reports flooded the tracker. Daniel Stenberg is a bit more optimistic about AI than we are, but even he saw that there were serious issues and decided to just say no to grifters using AI to try to make money off of him.

We encourage all of you to do the same, out of principle, and out of having ethical and moral standards in your projects. If you're proud of your project, don't jeopardize it by introducing code that is potentially copyrighted from other projects, that is statistically just average, and that will alienate many of your existing users who are tired of seeing this technology everywhere they go. Yes, even if you don't have an AI feature in your product, many of your users will be pissed when they hear about this.

Don't believe me? Jensen Huang, CEO of Nvidia, is himself complaining that too many people dislike AI. People will leave your product over these concerns.

It's not just about AI

Another thing I often see is comments stemming from the idea that contribution guidelines "cannot be enforced" and that enforcing them is a "futile effort".

I mean, setting aside the evidence to the contrary above, I don't see how this is supposed to work according to these people. If you have any other contribution guideline, and some people ignore it and go straight against it, do you let them through because it's "futile to enforce"?

No!

You go out there and tell them they broke the guidelines and their contribution isn't welcome. The same goes for AI-generated code. For any rule there will always be people trying to bypass it, and people who will complain, but that doesn't mean you should stop enforcing your rules.