The kernel's answer to AI coding tools

After months of debate, the Linux kernel project has formalized its first explicit policy for AI-assisted code contributions. As summarized by ZDNET, the new guidance reflects a practical compromise rather than an outright ban or an unconditional endorsement. The core message is simple: developers may use AI tools, but they cannot transfer responsibility to them. In the Linux world, where code quality, licensing discipline, and review norms are unusually strict, that distinction is the entire point.

The policy establishes three principles. First, AI agents cannot add Signed-off-by tags, because only human contributors can certify compliance with the kernel's Developer Certificate of Origin. Second, AI-assisted submissions must include an Assisted-by tag naming the model, agent, and auxiliary tools involved. Third, the human submitter bears full responsibility for reviewing the code, ensuring license compliance, and owning any bugs or security flaws that result. These rules turn AI use from a hidden variable into a disclosed part of the contribution process.

The result is less a statement about artificial intelligence than a defense of the kernel's chain of accountability. The Linux kernel is not just a software project. It is a legal and operational system with clear norms for provenance, review, and ownership. If generated code enters that system without transparent attribution, maintainers lose visibility into risk. The new Assisted-by requirement addresses that by giving reviewers a clear signal about how a patch was produced and where extra scrutiny may be needed.

Transparency over theater

ZDNET describes the approach as pragmatic, and that is the right frame. The policy does not pretend AI tools are absent from modern development, nor does it treat them as trusted peers. Instead, it classifies them as tools whose output must be disclosed and whose consequences remain human. That is probably the only position the kernel could credibly take. The project cannot allow an ambiguous authorship model when legal certification and technical review are so central to how changes are accepted.

The policy was shaped by controversy. As ZDNET recounts, debate intensified after Nvidia engineer and kernel developer Sasha Levin submitted a patch to Linux 6.15 that was generated entirely by AI, including its changelog and tests, though he reviewed and tested the result before submission. That episode crystallized a question many software communities now face: if AI helps produce a patch, what must be disclosed, and who stands behind the work?

The kernel's answer is notable because it rejects both extremes. It does not require developers to avoid AI altogether, and it does not permit them to hide behind automation. That combination may prove influential beyond Linux. Many open-source and enterprise projects are still improvising their own norms around generated code. The kernel has now offered a model in which disclosure is mandatory and liability is nontransferable.

Why this matters for software governance

The broader significance is that AI-assisted coding is becoming a governance issue, not just a productivity issue. A patch that compiles is not necessarily a patch that should be trusted. Projects need to know where code came from, who reviewed it, and who can answer for it later. In high-stakes codebases, those questions are inseparable from security, maintainability, and legal compliance.

That is why the Assisted-by tag matters even if it seems like a small procedural detail. It gives maintainers context. It can shape how intensively a patch is reviewed. It may also discourage careless use of AI tools by making disclosure unavoidable. If contributors know that generated work will be flagged for extra scrutiny, they have a stronger incentive to review it rigorously before submission.

The kernel community's new rules do not solve every problem around AI-generated code. ZDNET notes that the policy may not address the biggest challenge. But it does settle one core principle: the machine can assist, the human must answer. In a software ecosystem built on trust through process, that is the rule that matters most.

Why this story matters

  • The Linux kernel has now codified a formal policy for AI-assisted contributions.
  • Human contributors remain legally and technically responsible for submitted code.
  • Mandatory Assisted-by attribution could influence other open-source governance models.

This article is based on reporting by ZDNET.