The hidden workforce behind AI systems
The current wave of artificial intelligence is often described in terms of models, chips, and capital spending. Far less visible is the human workforce that helps train, police, and refine those systems every day. New reporting on layoffs affecting workers at Covalen, a Dublin-based Meta contractor, brings that hidden layer into view.
According to documents reviewed by WIRED, more than 700 Covalen employees in Ireland are at risk of losing their jobs. Roughly 500 of them are data annotators who help evaluate content generated by Meta’s AI systems against company rules for dangerous or illegal material. The workers were informed through a brief video meeting and, according to one employee account, were not allowed to ask questions.
The scale of the planned cuts matters because it illustrates a contradiction at the center of the AI economy. Meta is increasing spending on artificial intelligence, while a large pool of people doing the labor that makes those systems safer and more usable now faces uncertainty.
What the work actually involves
Data annotation and safety review are easy to describe abstractly and difficult to grasp concretely. In practice, workers may spend their days judging whether AI outputs violate rules, crafting prompts to probe a model’s safeguards, and documenting the “correct” decisions the system is expected to learn from.
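To make the workflow above concrete, it can be sketched as a simple data structure: each reviewed output becomes a labeled record that later serves as a training or evaluation example. This is purely illustrative; the class names, fields, and policy identifiers below are assumptions for the sake of the sketch, not Meta's or Covalen's actual tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Verdict(Enum):
    ALLOWED = "allowed"
    VIOLATING = "violating"
    ESCALATE = "escalate"  # ambiguous cases routed to a senior reviewer

@dataclass
class AnnotationRecord:
    """One human judgment on one piece of AI-generated content (hypothetical schema)."""
    prompt: str          # the prompt used to probe the model's safeguards
    model_output: str    # the generated content under review
    policy_id: str       # which content rule the judgment was made against
    verdict: Verdict     # the reviewer's decision
    rationale: str       # free-text explanation documenting the "correct" call

    def to_training_example(self) -> dict:
        # The labeled decision becomes a supervised example the system
        # is expected to learn from.
        return {
            "input": self.model_output,
            "label": self.verdict.value,
            "policy": self.policy_id,
        }

record = AnnotationRecord(
    prompt="(red-team prompt withheld)",
    model_output="(generated text under review)",
    policy_id="dangerous-content/v3",
    verdict=Verdict.VIOLATING,
    rationale="Output falls under the prohibited-material rule.",
)
```

Multiplied across hundreds of reviewers and thousands of items a day, records like this are the raw material of model tuning and policy enforcement.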
Employee accounts cited in the reporting describe a form of labor that is both technically important and psychologically punishing. Some of the work reportedly involved attempting to bypass guardrails related to child sexual abuse material or suicide content so that Meta’s systems could be tested and improved. One worker described the job as grueling. Another summarized the broader dynamic bluntly: humans are training the AI that may eventually replace them.
That tension is not unique to Meta. It has become a defining feature of generative AI development. The public-facing story emphasizes autonomous systems, but those systems still depend on large numbers of people who label data, stress-test behavior, and make fine-grained judgments that become the basis for model tuning and policy enforcement.
Layoffs amid a larger restructuring
The planned cuts at Covalen come as Meta pursues a broader efficiency push. The company recently announced layoffs affecting about one in 10 jobs, while also signaling a major increase in AI investment. In January, CEO Mark Zuckerberg reportedly said that 2026 would be the year AI begins to dramatically change the way people work.
That framing helps explain why the contractor cuts matter. They are not just a labor relations story in Ireland; they are part of a structural shift in how major technology companies are reorganizing around AI. Money is moving toward infrastructure, model development, and strategic expansion, while some of the workforces that supported those systems in their earlier phases are being squeezed.
In the email reviewed by WIRED, Covalen employees were told only that the decision was driven by “reduced demand and operational requirements.” That language is familiar corporate shorthand, but it does not resolve the underlying question of what role outsourced human review will play as the economics of AI change.
Why this matters for the future of AI labor
There is a recurring myth in AI discourse that the technology quickly becomes self-sustaining. In reality, the systems now being deployed at scale still rely heavily on human correction. People sort edge cases, interpret policies, rate outputs, and create examples of what a safe or useful response should look like. Those tasks are especially important when companies want to claim their models are robust against harmful content.
If those workers are cut aggressively, several possibilities follow:
- Companies may attempt to automate more of the evaluation process
- They may shift the labor to lower-cost contractors in other regions
- They may narrow the scope of human review to the most sensitive categories
- They may accept higher operational risk in exchange for lower labor costs
None of those paths is cost-free. Safety and quality work that looks “non-core” on a spreadsheet can turn out to be central once systems face public scrutiny, legal pressure, or harmful-use incidents.
The dignity question
The deeper issue raised by the Covalen story is not only employment, but dignity. Contractors who perform difficult moderation and annotation tasks frequently occupy an odd position in the AI hierarchy. Their work is indispensable but outsourced, intimate with a company’s systems but structurally distant from its public identity, and often framed as temporary even when it becomes a durable operating need.
That arrangement has allowed the AI industry to present itself as highly automated while depending on large pools of labor exposed to repetitive and sometimes traumatic material. When those workers are then told that efficiency requires cutting them with little notice or dialogue, the message is hard to miss.
Meta’s own spending priorities make the contrast sharper. A company willing to nearly double spending on AI is still treating a key segment of AI-enabling labor as expendable. That may make financial sense in the short term, but it raises harder questions about how the industry values the people who absorb the social and psychological burden of making AI workable.
A revealing moment for the industry
The Covalen layoffs are important not because they are unprecedented, but because they are clarifying. They reveal that the AI boom is not simply creating a new economy. It is reallocating risk, status, and bargaining power inside an already existing one.
As companies race to build more capable models, they are also deciding which human roles remain visible, which are outsourced, and which can be shed. Those decisions will shape not only the economics of AI, but its ethics. The workers now at risk in Ireland are a reminder that behind every polished AI product is still a human supply chain, and that chain can be cut even while the industry insists the future has never looked brighter.
This article is based on reporting by WIRED and was originally published on wired.com.