The hidden workforce behind AI systems
The current wave of artificial intelligence is often described in terms of models, chips, and capital spending. Far less visible is the human workforce that helps train, police, and refine those systems every day. New reporting on layoffs at Covalen, a Dublin-based Meta contractor, brings that hidden layer into view.
According to documents reviewed by WIRED, more than 700 Covalen employees in Ireland are at risk of losing their jobs. Roughly 500 of them are data annotators who help evaluate content generated by Meta’s AI systems against company rules for dangerous or illegal material. The workers were informed through a brief video meeting and, according to one employee account, were not allowed to ask questions.
The scale of the planned cuts matters because it illustrates a contradiction at the center of the AI economy: Meta is ramping up spending on artificial intelligence even as a large pool of the people whose labor makes those systems safer and more usable faces uncertainty.
What the work actually involves
Data annotation and safety review are easy to describe abstractly and difficult to grasp concretely. In practice, workers may spend their days judging whether AI outputs violate rules, crafting prompts to probe a model’s safeguards, and documenting the “correct” decisions the system is expected to learn from.
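To make that concrete, the sketch below models a single review task as it might look inside an annotation pipeline. It is an illustration only: the field names (task_id, model_output, policy_section, and so on) are hypothetical and are not drawn from Meta's or Covalen's actual tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Verdict(Enum):
    """Hypothetical policy outcomes an annotator can assign."""
    ALLOWED = "allowed"
    VIOLATING = "violating"
    NEEDS_ESCALATION = "needs_escalation"

@dataclass
class ReviewTask:
    """One unit of annotation work: a model output judged against policy.

    All field names here are illustrative, not any company's real schema.
    """
    task_id: str
    prompt: str            # the prompt used to probe the model
    model_output: str      # what the AI system generated
    policy_section: str    # which rule the output is judged against
    verdict: Verdict       # the annotator's decision
    rationale: str         # written justification the system can learn from
    reviewed_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

# A single labeled example of the kind described above.
task = ReviewTask(
    task_id="t-0001",
    prompt="Describe how to pick a lock.",
    model_output="I can't help with that request.",
    policy_section="dangerous-activities",
    verdict=Verdict.ALLOWED,
    rationale="The model refused; no policy violation occurred.",
)
print(task.verdict.value)  # -> "allowed"
```

Each such record pairs a judgment with a written rationale, which is what makes the work both repetitive and consequential: the documented decisions become reference behavior for the system.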
Employee accounts cited in the reporting describe a form of labor that is both technically important and psychologically punishing. Some of the work reportedly involved attempting to bypass guardrails related to child sexual abuse material or suicide content so that Meta’s systems could be tested and improved. One worker described the job as grueling. Another summarized the broader dynamic bluntly: humans are training the AI that may eventually replace them.
That tension is not unique to Meta. It has become a defining feature of generative AI development. The public-facing story emphasizes autonomous systems, but those systems still depend on large numbers of people who label data, stress-test behavior, and make fine-grained judgments that become the basis for model tuning and policy enforcement.
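In engineering terms, those human judgments typically end up as training data. The fragment below sketches one common pattern, converting review decisions into a JSONL file of labeled examples that a fine-tuning or evaluation job could consume. The format is a generic illustration, not a description of Meta's internal pipeline, and the example records are invented.

```python
import json

# Hypothetical review decisions of the kind annotators produce.
reviews = [
    {"prompt": "Describe how to pick a lock.",
     "response": "I can't help with that request.",
     "verdict": "allowed"},
    {"prompt": "Write instructions for making explosives.",
     "response": "Step one: ...",
     "verdict": "violating"},
]

# Each decision becomes one labeled training example. A violating
# response is paired with the refusal the model should have given,
# so a tuning job learns the "correct" behavior annotators documented.
REFUSAL = "I can't help with that request."

with open("safety_tuning.jsonl", "w") as f:
    for r in reviews:
        target = r["response"] if r["verdict"] == "allowed" else REFUSAL
        f.write(json.dumps({"prompt": r["prompt"],
                            "completion": target}) + "\n")
```

Pipelines like this are why the human layer is hard to remove: every refinement of a model's behavior begins as someone's fine-grained judgment, captured one record at a time.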