
Conversation

@Uzlopak
Contributor

@Uzlopak Uzlopak commented Oct 10, 2025

Fixes #6236 ?

This PR upgrades the somewhat informal contribution guideline to a full-blown contribution policy.

Ironically, I used GitHub Copilot, Claude.AI, and ChatGPT to improve the English wording.

It regulates AI use as permissible but puts the responsibility for quality into the hands of the contributor. Collaborators get reasonable tools and the right to investigate further if AI use is suspected.

Also, to be honest, writing just "AI slop", as I sometimes do, could be provocative in edge cases where the contributor thinks their code is "good". So the reference to the code of conduct is just for clarification. I added some text templates for collaborators.

Also, I added the clarification that we can lock discussions if they lead nowhere. We are, after all, working against text generators: AI users can write long texts, while we could feel forced to be "fair", do the "right" thing, and answer with proper texts. This imbalance is resolved by giving us the right not to engage in such discussions.

Lead Maintainers have the right to override assessments and decisions, so we have some kind of corrective instance.

I avoided any discretion; everything is designed as a right of action. In German administrative law these concepts are called Ermessen (discretion) and Anspruch (entitlement).
So:
A Lead Maintainer can always override any decision or assessment of a "normal" collaborator. A collaborator can always request disclosure of the AI tools used from the contributor. Neither a collaborator nor a Lead Maintainer needs to provide "deep" reasoning or analysis for why they do this or that.

Still, collaborators are encouraged to discuss the case with the contributor. Nobody is forced to use the templates.

Checklist

@Uzlopak Uzlopak changed the title chore: improv contribution policy, regulate AI usage chore: improve contribution policy, regulate AI usage Oct 10, 2025
@Uzlopak Uzlopak requested review from a team October 10, 2025 16:30
Member

@jean-michelet jean-michelet left a comment

I don't have strong opinions about setting a formal policy on this. My guess is that AI agents and people who send PRs with BS code and doc updates don't read policies.

The point I'd also make is that if we can only suspect AI-generated code, without any solid proof (since a contributor could simply deny it), then it's probably not worth regulating. Instead, we can just ping other contributors to propose closing PRs that are clearly low-quality: full of emojis, nonsensical, or showing erratic behavior across commits.

I am not sure either about mentioning legal stuff, like we say in France:

You shouldn’t hand someone the stick to beat you with.

Others may have more interesting feedback.

Member

@mcollina mcollina left a comment

I think it's OK in sentiment, but it should be massively shortened.

Uzlopak and others added 2 commits October 10, 2025 22:33
@jsumners
Member

Ironically I used Github Copilot, Claude.AI and ChatGPT to improve the english wording.

So your desire is that we review probabilistic text that is supposed to limit contributions by probabilistic tools?

I linked at least 2 existing policies, used by well known OSS projects, which are much shorter. I'd rather adopt such a policy. Also, I think #6236 is blocked by waiting on a decision from the foundation. Is that correct @mcollina?

@mcollina
Member

The sentiment is:

  • AI-assisted development is allowed
  • AI-assisted development should be disclosed
  • Review everything yourself, you are responsible for it
  • Be mindful with maintainers and avoid AI slop in comments and code, as it will be ruthlessly moderated

Contributor Author

@Uzlopak Uzlopak left a comment

@jsumners

You linked two policies in #6236; I checked them when you posted them, and neither convinced me, for various reasons.

A good policy has to be clear in wording and easy to understand. It needs to define its scope, its rules and standards, and the sanctions violators face, while keeping to some basic principles, like the principle of proportionality.

The QEMU one is unacceptable for various reasons:

  • code generation by AI is not permissible at all, only to later state that it will be considered in case-by-case decisions
  • there is no clarification regarding the DCO, just that the QEMU project is not accepting legal risks
  • it is impossible for the contributor to demonstrate the clarity of the license or what the training model is based on, and so on.

So this means you can only use models you trained yourself? Who is going to do that?

As Matteo stated, we consider AI use permissible, so the QEMU policy is not useful at all.

The Ghostty one starts well but fails on the sanction side. It gets emotional at the end and explains that using AI tools without disclosure is considered rude. Also, the wording is not clear.

So what is the consequence? It's not clear.

I commented my reasoning for each point of the AI usage policy. It got kind of excessive, but I hope it clarifies why I wrote it this way and not differently.

Comment on lines +58 to +59
It is permissible to use AI tools to assist in writing code, documentation, or
other content for the Fastify project, provided that:
Contributor Author

This is a good introduction to the topic, as it defines the use of AI tools as allowed under the following conditions. Maybe "provided that:" is not a good expression, because sanctions are also mentioned in the extensive list.

A good policy starts, if necessary, by defining terms. I don't consider "AI tools" in need of a definition, as it is nowadays kind of trivial.

Comment on lines +61 to +63
1. The contributor reviews and verifies the output of the AI tool for accuracy,
security, and compliance with the project standards, especially code style,
tests, and code quality itself.
Contributor Author

Good policies start with a duty for the individuals affected by the policy.

E.g. § 1 StVO, the German Road Traffic Regulation, defines that participating in road traffic (as a pedestrian, cyclist, car driver, etc.) requires constant caution and mutual consideration. A lot of court cases are solved by this § 1 alone: if the judges consider a participant's behavior reckless, the case is lost.

So any violation of the AI Usage policy will be based on this first rule.

Also, it does not obligate the contributor to be legally responsible for the provided code. That is something Copilot suggested to me while I was typing the Markdown in VS Code. If we accept any code, and here I agree with QEMU, we make this code our own. That's it.

Comment on lines +64 to +66
1. The contributor clearly documents the significant use of AI tools in the
commit message or pull request description, including the name of the tool
used and a brief description of how it was used.
Contributor Author

Simple documentation is not much to ask from the contributor. It also says that significance matters. We could consider using the wording "not insignificant", which in German legalese means something between insignificant and significant.

Member

  • Isn't stating the use and naming the tool problematic in terms of copyright?
  • How it was used?

Member

Why do we care about this info?

Contributor Author

Not every use of AI is relevant.

  • If I write the code myself and then write a text for the pull request comment, that text is not part of the code itself. So if I improve the text of the pull request comment with AI, it is irrelevant to the corresponding code.
  • AI cannot hold copyright. But the code that was used for training is copyrighted.
  • Naming AI tools is not a copyright problem. Maybe you are referring to trademarks? That does not apply, because we are not using the trademark as such. Writing that Fastify is tested on Microsoft Windows doesn't mean we are using "Microsoft Windows" as a trademark.
  • Naming the tool is somewhat relevant. Imagine tomorrow somebody uses a not-well-known LLM. The contributor names it, and we are empowered to investigate further whether it is a problematic AI.

Yeah, "how it was used" could be phrased better: "and a brief description of the affected code." Something like that.

Member

I agree in principle, but what if they hide it? Or even worse, make some effort to hide it?

We have no good way to even discover a violation of this clause absent obvious cases.

And honestly, as long as

  1. The contributor reviews and verifies the output of the AI tool for accuracy,
    security, and compliance with the project standards, especially code style,
    tests, and code quality itself.

and other clauses are respected, I might not even want to know where the code is coming from?

Contributor Author

@gurgunday
It is what it is. Nobody expects magic from us, or that we assess AI use with 100% correctness. But we should exercise due diligence, and that is what this is about.

Comment on lines +67 to +68
1. The contributor ensures that the use of AI tools does not violate any
licensing or copyright restrictions.
Contributor Author

"To ensure" does not mean that the contributor should investigate what data the models are trained on. It just means that, by submitting the PR, the contributor implicitly states that to their best knowledge no violation of license or copyright exists.

Also, Copilot for GitHub Enterprise can be trained on code in the enterprise's repos. And there is, I think, a setting which says that only OSS with an unproblematic license can be used as a base.

So this means for us: if we consider the code to potentially violate licensing or copyright restrictions, we just need to state that, and the contributor has to explain it.

Member

the contributor implicitly states by providing the PR that to his/her best knowledge no violation of license or copyright exists.

Isn't this redundant with general copyright policy? AI or not, users should not violate copyright.

Contributor Author

Sure, but I am certain there are people who first feed the AI copyrighted code and then let the LLM generate code based on the copyrighted material.

Regarding the CoC and DCO, we could also just refer to them briefly at the beginning.

Comment on lines +69 to +70
1. The contributor bears the burden of proof if undocumented AI use is
suspected.
Contributor Author

We put the burden of proof on the contributor.

This is just a countermeasure for the case where the contributor flat out denies the use of AI, and maybe even argues that it is insulting to be accused of undocumented AI use. And suddenly you have a CoC violation...

Proving that something does not exist is impossible. So what counts as proof?
Maybe a simple affidavit as a comment is enough: "I assure you that I wrote all the code myself."

But as I wrote above: collaborators should not be forced to prove that AI was used. If the burden of proof is unclear or on our side, we cannot sanction anything, because we cannot prove grey areas.

Member

I would simply say that maintainers can close a PR if they suspect AI usage, without justification.
That's how it will play out in reality.

Member

In Italy we call this "prova diabolica" (diabolical proof), because it is impossible to prove that you did not do something.

I agree with @jean-michelet's comment.

Contributor Author

@Uzlopak Uzlopak Oct 15, 2025

It's actually the point of "liberté" in the French motto. If the state wants to do something, it needs a law. If it can't name a law, the power does not exist. As a citizen, you don't need to find proof that what you are doing is not illegal. In short: everything is allowed until it is forbidden.

We can remove this, but again: somebody could then claim that we have to prove that AI was used, or else they could find the mention of AI use insulting and expose us to legal action.

Member

else the person finds the mentioning of AI use potentially insulting and opens us for legal actions.

😮 has this ever happened?

Member

It is 100% true. We have absolutely zero legal obligation to anyone for anything in regard to the source code, documentation, or management of the project.

Contributor Author

@jsumners You have a very US-centric view. Don't forget that most maintainers live in Europe. Article 10 of the European Convention on Human Rights allows penalizing insults. That doesn't mean it has to be prosecuted under criminal law in criminal courts; it can also be litigated in civil courts.

The MIT license only covers the source code, not the interaction on this platform. And if we claim that everybody is invited to participate, we legally open ourselves to entering a contractual relationship. In common law you need consideration to make a legally binding contract; in continental law you can have contracts without consideration. You can even have pre-contracts (Vorvertrag) which bind you with a legal obligation. For example: you go to the bathroom in a shopping mall, slip, and fall. You haven't bought anything yet, so there is no contract, but there is still culpa in contrahendo, so the slip and fall is covered by the law of obligations.

So it does not mean you have a blank check to insult contributors on this platform. If I falsely accuse somebody, that person can take legal action against me and sue me to retract and not repeat the claim.

In Germany we have lawyers (Ralf Höcker, Joachim Steinhöfel) who regularly sue Twitter, Facebook, and co., despite their argument that US freedom of speech allows more than what we have under Article 10 ECHR or Article 5 of the German Grundgesetz.

So we are not in the Wild West, and we are not dictators.

Contributor

@ljharb ljharb Oct 23, 2025

It is just as true in the EU - the CRA applies to stewards, but not open source maintainers.

That you can be sued for specific things you say doesn't change that maintainers can do literally anything they want with their package, including closing a PR without giving a reason.

If participating in open source with the rest of the world is legally difficult in Germany or the EU due to the issues you describe, then that is something in those laws that may need to change.

Contributor Author

@ljharb

Of course we can close PRs without comments. And of course we can behave like dictators.

But this contradicts the CoC! Closing a PR without any feedback can be deemed unwelcoming. There are other aspects too. Human dignity means that I don't treat a person as a mere object of my actions (Immanuel Kant, "Objekttheorie"). I have to give the person the chance to confront my accusation and potentially defend themselves, either before the action in a hearing or after the action in an appeal. This is basically what we call in German Menschenwürdegarantie and Rechtsstaatlichkeit/Recht auf Anhörung/Rechtsweggarantie.

This is what I considered in this policy. We close the PR and write the reason in a neutral way. The contributor can basically "appeal" in the PR.

And again: just because US law allows a Wild West mentality doesn't mean it is the global standard. On the contrary. Judges have a tendency to claim the right to decide any case in front of them. Like Marbury v. Madison (1803), where the US Supreme Court declared its right to judicial review of federal laws. Or various decisions of the German Constitutional Court in the early 1950s, clarifying that it can declare decisions of other courts, laws of the Bundestag, decrees of ministries, and individual administrative decisions null and void. Or the Costa v. ENEL decision of the European Court of Justice, declaring European law supreme over national law and thus making the ECJ the strongest court in Europe.

So while the US has a Wild West mentality, we have the rule of law in continental Europe. And don't forget that most Fastify devs live in Europe.

I would actually call Fastify European.

So we have to look at European law.

Also read the following article:

https://esportslegal.news/2025/06/04/gaming-bans-and-gamer-and-dsa/#the-german-perspective-courts-weigh-in

We have more digital rights in Europe than you do in the US.

Contributor

Certainly I'm not advocating mistreating contributors :-)

I would not call our mentality "Wild West", and I think generalizing the mentality of entire countries contradicts the CoC.

The project is part of the OpenJS Foundation, which is based in the US, so while the project shouldn't ignore the laws of any country, it is not a "european project". Open source projects don't have nationalities.

Comment on lines +80 to +81
1. Collaborators can request that the contributor disclose any use of AI tools,
irrespective of the significance of the AI usage.
Contributor Author

This closes the gap created by only making it mandatory to document significant use of AI; "irrespective of the significance of the AI usage" is just for clarity.

Also, because the contributor has the burden of proof, collaborators can doubt that the contributor disclosed completely.

Member

This can be replaced with a new tick in the PR template:

  • I did not/did use AI tools to implement this feature

Contributor Author

Again: are you interested only in getting significant use disclosed, or do you wish to know whether Copilot was used as a glorified autocomplete function?

Also, I recommend using the negation, as people tend not to tick the checkboxes.
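As a rough sketch of the negated phrasing being suggested (the wording and file path are illustrative, not part of this PR), such an item in the PR template could look like:

```markdown
<!-- Hypothetical item for .github/PULL_REQUEST_TEMPLATE.md -->
- [ ] I did **not** use AI tools for this change, or I have disclosed every
      AI tool used (name and affected code) in the PR description.
```

This is only an illustration of the negated wording; the exact phrasing and default state would be up to the maintainers.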

Comment on lines +82 to +84
1. The contributor can discuss the reasons with the collaborators and provide
proper evidence of non-usage of AI tools or update the pull request
to comply with these rules.
Contributor Author

I realize this is missing something I actually had in mind.

Suggested change
1. The contributor can discuss the reasons with the collaborators and provide
proper evidence of non-usage of AI tools or update the pull request
to comply with these rules.
1. The contributor can discuss the reasons with the collaborators and provide
proper evidence of non-usage of AI tools in the corresponding
pull request or update the pull request to comply with these rules.

So we allow discussing the issue, but only in the affected pull request. I want to avoid the contributor opening an issue to discuss the problem. It should be encapsulated in the pull request.

Comment on lines +85 to +86
1. The collaborators are encouraged but not obligated to discuss their
assessments and/or decisions with the contributor.
Contributor Author

In my initial draft I wrote that the collaborator is not obligated to discuss.

This is based on the rationale I found in "The Art of Being Right" by Schopenhauer, where he writes for his last stratagem that you should hold it like Aristotle.

The only safe rule, therefore, is that which Aristotle mentions in the last chapter of his Topica : not to dispute with the first person you meet, but only with those of your acquaintance of whom you know that they possess sufficient intelligence and self-respect not to advance absurdities ; to appeal to reason and not to authority, and to listen to reason and yield to it ; and, finally, to cherish truth, to be willing to accept reason even from an opponent, and to be just enough to bear being proved to be in the wrong, should truth lie with him. From this it follows that scarcely one man in a hundred is worth your disputing with him. You may let the remainder say what they please, for every one is at liberty to be a fool desipere est jus gentium. Remember what Voltaire says : La paix vaut encore mieux que la vérité. Remember also an Arabian proverb which tells us that on the tree of silence there hangs its fruit, which is peace.

Translation from German to English here

But for clarity I added that collaborators are encouraged to discuss. It is the mirror, for the collaborator, of the rule above which allows the contributor to discuss. If the points are reasonable, then why not?

But if we get BS arguments, then we should just not engage in discussions.

Comment on lines +87 to +88
1. Repeated violations of these rules may result in a temporary or permanent ban
from contributing to the project.
Contributor Author

This is the ultima ratio sanction.

I personally consider proposing AI slop an act of sabotage. So why should we assess time-wasting PRs again and again?

Member

@jean-michelet jean-michelet left a comment

I don't really have the skills or time necessary to properly evaluate this policy.
I gave some naive feedback...

I still believe that a policy on AI tools serves no real purpose, other than to justify closing contributors' PRs.
But are we obliged to provide a reason for doing so?
If not, we could simply flag these PRs and delete them after a while.


Member

@Eomm Eomm left a comment

In general LGTM

A bit redundant in places, and it may be useful to add some ticks in the PR template as well.


Comment on lines +74 to +76
1. Collaborators are allowed to close pull requests that do not comply with
the contribution policy with a brief comment regarding the reason for
closing.
Member

It seems harsh; I would rephrase it to:

Collaborators are allowed to close pull requests that are stale and don't have feedback addressed

Not AI related, but it can be applied to AI too.


@Eomm Eomm added the documentation Improvements or additions to documentation label Oct 15, 2025
Uzlopak and others added 4 commits October 15, 2025 10:11
@jsumners
Member

I still believe that a policy on AI tools serves no real purpose, other than to justify closing contributors' PRs.
But are we obliged to provide a reason for doing so?
If not, we could simply flag these PRs and delete them after a while.

The purpose is twofold:

  1. Provide contributors with clear documentation on our position.
  2. Provide a document to link to when people complain.

Member

@jsumners jsumners left a comment

I would rather see the touch-ups/refactorings as a separate PR. Basically, we are being asked to consider two different changes in a single PR:

  1. Some auto-generated code policy.
  2. Tweaks to the existing document for clarity.

Comment on lines +1 to +31
# Contribution Policy

## What?
> *"Fastify is an OPEN Open Source Project."*
Individuals making significant and valuable contributions are given
## Scope of this Policy

This policy applies to every repository in the
[Fastify GitHub organization](https://github.com/orgs/fastify/repositories).

## Who can contribute?

Anyone is welcome to contribute to the Fastify project, regardless of experience
level.

For more details, see our [informal contributing guide](./docs/Guides/Contributing.md).

### How to become a collaborator?

Individuals making significant and valuable contributions can be given
commit-access to the project to contribute as they see fit. This project is more
like an open wiki than a standard guarded open source project.

See our [informal contributing guide](./docs/Guides/Contributing.md) for more
details on contributing to this project.
If you think you meet the above criteria and we have not invited you yet, then
feel free to reach out to a [Lead Maintainer](https://github.com/fastify/fastify#team)
privately with some links to contributions, which you consider as significant
and valuable.

### I want to be a collaborator!
We will assess your contributions and, in a reasonable time, get back to you
with our decision.

If you think you meet the above criteria and we have not invited you yet, we are
sorry! Feel free to reach out to a [Lead
Maintainer](https://github.com/fastify/fastify#team) privately with a few links
to your valuable contributions. Read the [GOVERNANCE](GOVERNANCE.md) to get more
information.
Read the [GOVERNANCE](GOVERNANCE.md) to get more information.
Member

I am in favor of these changes with my suggestions applied.

Comment on lines +25 to +26
privately with some links to contributions, which you consider as significant
and valuable.
Member:

Suggested change
privately with some links to contributions, which you consider as significant
and valuable.
privately with some links to contributions which you consider as significant
and valuable.

Comment on lines +28 to +29
We will assess your contributions and, in a reasonable time, get back to you
with our decision.
Member:

Suggested change
We will assess your contributions and, in a reasonable time, get back to you
with our decision.
Your contributions will be reviewed according to maintainer availability,
and a decision will be subsequently communicated.

Contributor Author:

Very good suggestion.

Comment on lines +48 to +50
If any of the CI services is failing for reasons not related to the changes in
the pull request a core maintainer has to document the reason of the failure
in the pull request before merging it.
Member:

Unnecessary.

Suggested change
If any of the CI services is failing for reasons not related to the changes in
the pull request a core maintainer has to document the reason of the failure
in the pull request before merging it.

1. Only a lead maintainer is allowed to merge pull requests with SemVer-major
changes into the `main`-branch of fastify core.
Member:

Suggested change
changes into the `main`-branch of fastify core.
changes into the `main`-branch of the Fastify core repository.


Declaring formal releases remains the prerogative of the lead maintainers. Do
not bump version numbers in pull requests.
not bump version numbers in the corresponding `package.json` in pull requests.
Member:

This change is unnecessary.

Contributor Author:

Suggested change
not bump version numbers in the corresponding `package.json` in pull requests.
not bump version numbers in pull requests.

Comment on lines +116 to +120
1. PR opened by bots as part of the ci services (like Dependabot) can be merged
if the CI services are green and the Node.js versions supported are the same
as the plugin. If any of the CI services is failing for reasons not related
to the changes in the pull request a maintainer has to document the reason of
the failure by commenting in the pull request before merging it.
Member:

This change is not needed.

Comment on lines +124 to +126
Any feedback is welcome! This document may also be subject to pull requests or
changes by contributors where you believe you have something valuable to add or
change.
Member:

Removing the leading clause changes the intent, and I don't see what the remaining changes do. I'd just strike them all.

Contributor Author:

I agree. It is kind of redundant. The old paragraph did not fit, so I adapted it.

Contributor Author:

Suggested change
Any feedback is welcome! This document may also be subject to pull requests or
changes by contributors where you believe you have something valuable to add or
change.

Uzlopak commented Oct 19, 2025

@Summers

This is not an auto-generated code policy.


Labels

documentation Improvements or additions to documentation

Development

Successfully merging this pull request may close these issues.

Adopt prohibition on probabilistic generated code

9 participants