Whistleblowing Goes Global
In recent weeks, the AI industry has witnessed a growing wave of public dissent from within its own ranks. Former safety workers and researchers at some of the world's most prominent AI companies, including OpenAI and Anthropic, have gone public with concerns about how their employers handle safety testing, deploy models, and respond to internal warnings about potential risks. These disclosures have sparked intense debate about the adequacy of AI safety practices and whether the industry's rapid pace of development is outstripping its ability to ensure that powerful systems are deployed responsibly.
Now a new initiative is attempting to formalize and protect this kind of internal reporting on a global scale. Psst, a digital safe reporting platform, allows AI workers anywhere in the world to document and submit safety concerns through a secure channel, even in jurisdictions that lack robust whistleblower protection laws. The platform's founding board member, attorney Mary Inman, says the goal is to ensure that workers at AI companies can speak up about potential harms without fear of retaliation, regardless of where they are based.
Why Geography Matters
Whistleblower protections vary enormously across countries. In the United States, federal and state laws offer some protections for employees who report wrongdoing, though their effectiveness and scope are subjects of ongoing debate. In the European Union, the Whistleblower Directive (Directive (EU) 2019/1937), adopted in 2019, provides a baseline of protections across member states, although implementation has been uneven.
But AI development is a global activity. Major AI labs operate research offices and hire talent across dozens of countries, many of which have minimal or no whistleblower protection laws. A safety researcher in Singapore, India, or the United Arab Emirates who discovers concerning practices at their employer may have no legal avenue to report those concerns without risking their career — or worse.
Psst is designed to fill this gap by providing a technology-based solution to a governance problem. By offering encrypted, anonymous reporting channels that are accessible from any country, the platform aims to create a safety net that operates independently of any national legal framework. Reports submitted through the platform can be routed to appropriate regulatory bodies, academic researchers, or public interest organizations depending on the nature and severity of the concern.
The Wave of AI Safety Disclosures
The timing of Psst's emergence is significant. The past year has seen an unprecedented volume of public disclosures by current and former employees of leading AI companies. Mrinank Sharma's departure from Anthropic and subsequent public statements about safety practices drew widespread attention, as did multiple former OpenAI employees who questioned whether the company's commercial pressures were compromising its safety commitments.
These disclosures have typically come from individuals with the financial security, immigration status, and professional reputation to absorb the personal costs of speaking out. The vast majority of AI workers who harbor similar concerns lack these protections and remain silent. Psst's thesis is that the disclosed concerns represent only the tip of an iceberg, and that a secure reporting mechanism could surface a much broader picture of safety issues across the industry.
The Challenge of Verification
One of the fundamental challenges facing any whistleblower platform is verification. Anonymous reports, while protecting the reporter, can be difficult to corroborate and easy to dismiss. Companies accused of safety lapses can argue that anonymous claims lack credibility, while the absence of a named accuser makes it harder for regulators or journalists to investigate.
Psst is attempting to address this by building relationships with trusted intermediaries who can evaluate the credibility of reports without exposing the identity of the reporter. The platform also encourages workers to submit documentation — internal emails, test results, meeting notes, policy documents — that can substantiate their concerns independently of their personal testimony.
Industry and Regulatory Response
The reaction from AI companies to the growing whistleblower movement has been mixed. Some firms have publicly committed to protecting employees who raise safety concerns through internal channels, while others have used non-disclosure agreements and other legal instruments that critics say have a chilling effect on internal dissent.
Regulators are watching closely. The European Union's AI Act includes provisions related to transparency and accountability that could create formal channels for safety reporting. In the United States, congressional hearings on AI safety have touched on the need for whistleblower protections specific to the AI industry, though no comprehensive legislation has been enacted.
What This Means for AI Development
The emergence of dedicated AI whistleblower infrastructure reflects a maturing of the AI safety debate from abstract philosophical discussions to practical governance questions. As AI systems become more powerful and more deeply integrated into critical infrastructure, healthcare, finance, and defense applications, the consequences of inadequate safety practices become increasingly severe.
Psst and similar initiatives represent an acknowledgment that effective AI governance cannot rely solely on companies to police themselves or on governments to regulate from the outside. It requires mechanisms that empower the people closest to the technology — the researchers and engineers building these systems — to raise alarms when they see problems, without destroying their own careers in the process.
Whether such platforms can meaningfully influence industry behavior remains to be seen. But in an era when the pace of AI development consistently outstrips the pace of AI regulation, whistleblower platforms may serve as an important early warning system, surfacing concerns that might otherwise remain hidden until they manifest as real-world harms.
This article is based on reporting by Rest of World.