California is asserting its own authority over applied AI
California Governor Gavin Newsom has signed an executive order requiring companies with state contracts to implement safeguards against AI misuse. The order says contractors must ensure their AI systems do not generate illegal content, reinforce harmful biases, or violate civil rights. It also directs state agencies to watermark AI-generated images and videos to reduce misinformation risks.
The move is significant because it treats AI governance as a procurement issue as much as a technology issue. Instead of waiting for a comprehensive federal framework, California is using its buying power to set operational expectations for companies that want to do business with the state.
Procurement is becoming a regulatory tool
This matters because state contracting is one of the fastest ways governments can influence corporate behavior without passing a broad new law. If a company wants access to public-sector business, it may need to prove that its models and deployment practices meet specific safety and civil-rights standards. In effect, procurement becomes a compliance mechanism.
The executive order also gives California’s procurement and technology agencies 120 days to develop recommendations for new AI certifications. Those certifications would allow companies to demonstrate adherence to responsible AI practices and public-safety protections.
That detail is especially important. Guidance alone can be vague; certification frameworks create a pathway toward auditable standards. Even if the details are still to come, California is signaling that self-attestation may not be enough for vendors handling sensitive public work.
The state is also challenging federal vendor decisions
The order includes a separate provision addressing federal directives. If the U.S. government designates a company as a supply-chain risk, California says it will conduct its own review and may continue working with that vendor. The Decoder’s reporting frames this as a direct response to a Pentagon designation involving Anthropic that bars government contractors from using Anthropic technology for U.S. military work.
That clause goes beyond routine procurement policy. It suggests California is willing to exercise independent judgment even when federal authorities reach a different conclusion. In practical terms, the state is reserving the right to define acceptable AI partners according to its own review process.
This is the clearest sign that AI regulation in the United States may become fragmented not only across agencies, but across levels of government. Washington can set one direction for federal contractors while a major state market sets another. For companies, that means compliance may become geographically layered.
The content rules reflect the current political center of AI risk
The safeguards in the order focus on three themes that have become central to mainstream AI governance debates: illegal outputs, bias and discrimination, and synthetic-media misuse. Those choices are revealing. California is not trying to regulate every possible technical question. It is concentrating on the categories most likely to generate public harm, legal exposure, and political conflict in deployed systems.
Watermarking AI-generated images and videos is also notable because it treats provenance as a public-interest issue. As synthetic media becomes easier to generate and harder to identify, governments are increasingly looking for practical methods to distinguish authentic material from algorithmic output. California is now trying to embed that norm into state agency operations.
The order does not solve the underlying technical challenges. Watermarks can be removed, inconsistent standards can create confusion, and model behavior remains difficult to audit across contexts. But the policy direction is clear: if AI is going to be used in public-facing or publicly funded settings, the state expects visible safeguards.
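The order does not prescribe a watermarking mechanism, so as a deliberately naive sketch of the fragility problem, the Python snippet below embeds a provenance claim in a PNG metadata chunk using Pillow and then shows the claim vanishing after a single re-encode. The field names ("ai-generated", "generator") are invented for illustration; this is a toy example of the failure mode, not the method California or any vendor would actually use.

```python
# Toy illustration of metadata-based provenance tagging and why it is fragile.
# Field names here are hypothetical; the executive order does not specify a standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Create a stand-in "AI-generated" image.
img = Image.new("RGB", (64, 64), color=(200, 30, 30))

# Embed a provenance claim as a PNG text chunk.
meta = PngInfo()
meta.add_text("ai-generated", "true")
meta.add_text("generator", "example-model-v1")  # placeholder identifier
img.save("tagged.png", pnginfo=meta)

# A downstream checker can read the claim back...
tagged = Image.open("tagged.png")
print(tagged.text.get("ai-generated"))  # -> "true"

# ...but one lossy re-encode silently strips the metadata.
tagged.convert("RGB").save("laundered.jpg", quality=90)
laundered = Image.open("laundered.jpg")
print(getattr(laundered, "text", {}).get("ai-generated"))  # -> None
```

More robust approaches, such as C2PA Content Credentials (cryptographically signed provenance manifests) or pixel-domain watermarks like SynthID, are designed to detect or survive exactly this kind of laundering, though none is fully tamper-proof.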
Why this could matter nationally
California often functions as a policy laboratory because of its size and economic weight. Requirements adopted for one large market can influence product design and governance practices well beyond state borders. Vendors serving California may find it more efficient to apply similar controls elsewhere rather than maintain separate compliance tracks.
That is why the order matters outside Sacramento. Even before the certification details are written, it tells AI companies that large public customers increasingly expect demonstrable safeguards, not just broad assurances. It also tells Washington that state governments are willing to move independently when they believe federal policy is insufficient or misaligned.
The result may be a more complicated regulatory map, but also a faster-moving one. California is betting that waiting for a unified national framework is less practical than setting immediate terms for the systems it pays for. For AI developers and enterprise vendors, the message is straightforward: governance is no longer an abstract policy debate. It is becoming a condition of market access.
This article is based on reporting by The Decoder, originally published on the-decoder.com.