A rapid policy reversal on public code

NHS England is withdrawing software it has written from public view and telling staff that source code repositories must be private by default, according to new guidance reported by New Scientist. The move marks a sharp shift for an institution whose software has traditionally been made public on the grounds that it was funded by taxpayers and could be reused by others.

The change is being driven by concern that increasingly capable artificial intelligence systems could ingest public code, infer weaknesses, and help attackers identify vulnerabilities. The new guidance reportedly sets an 11 May deadline for making repositories private unless there is an explicitly approved reason to keep them public.

The AI trigger: Mythos

According to the reporting, NHS England specifically cited an Anthropic AI system called Mythos as a reason for the new posture. The guidance argues that public repositories risk disclosing not just source code, but also architectural decisions, configuration details, and contextual information that could be exploited as AI systems improve at large-scale code analysis and reasoning.

That concern reflects a broader shift in cybersecurity thinking. For years, defenders have worried about human attackers trawling through exposed systems and code. The new fear is that AI could automate parts of that work, rapidly processing software artifacts at a scale that compresses the time between exposure and exploitation.

Why critics say the move may backfire

Security experts quoted in the reporting argue the policy is unnecessary and counterproductive. One reason is that open-source software has long rested on a different theory of safety: public visibility can improve quality because more people can inspect, test, and fix code. Closing repositories may reduce transparency without actually eliminating vulnerabilities.

The reporting also notes that the UK government-backed AI Security Institute examined Mythos and concluded it was capable of attacking only “small, weakly defended and vulnerable enterprise systems,” with no sign that a genuinely secure system or network would be broadly at risk. If that assessment is accurate, then NHS England’s response may be disproportionate to the demonstrated threat.

Open government versus defensive secrecy

This dispute lands at the intersection of two policy instincts that are now colliding more often. One says publicly funded digital infrastructure should be shared openly to avoid duplication, improve public services, and let others build on state-funded work. The other says defensive secrecy is becoming more valuable as AI lowers the cost of reconnaissance for attackers.

NHS England’s new rule clearly favors the second view, at least for now. But the tradeoff is significant. Once code is closed by default, collaboration becomes harder, outside review narrows, and the public has less visibility into software that may shape healthcare operations and data systems.

A preview of a wider debate

The NHS decision matters beyond Britain because many public institutions are asking versions of the same question. Should AI-era security threats change the default assumptions behind open-source publication? Or does retreating from openness sacrifice long-term resilience for a short-term feeling of control?

The answer is unlikely to be one-size-fits-all. Some code bases may indeed expose sensitive operational details that should not be public. Others may become less safe when external scrutiny disappears. The challenge is distinguishing between those cases with evidence rather than fear.

What the decision signals

The immediate signal is that advanced AI models are already influencing real institutional policy, even when the technical evidence remains contested. NHS England is not waiting for a settled consensus before changing how its software is handled. It is moving preemptively toward a default-closed position.

Whether that approach improves security remains uncertain. What is certain is that the AI security debate has moved beyond theory. It is now reshaping procurement rules, publication standards, and the boundaries of digital transparency inside public-sector organizations.

If more governments follow the NHS path, one of the biggest indirect effects of AI may be a quieter internet: less public code, fewer open repositories, and a redefinition of what public digital infrastructure is supposed to look like.

This article is based on reporting by New Scientist.

Originally published on newscientist.com