Hidden in the Supply Chain

An investigation by Rest of World has revealed that gig workers across Africa who were hired through data-labeling platform Appen to perform routine annotation tasks — identifying objects in images, transcribing audio, categorizing text — were unknowingly contributing to AI systems used by the United States military. The workers, many of whom were paid a few dollars per hour, had no idea that their labor was feeding into defense and intelligence applications.

The revelation exposes a troubling aspect of the AI supply chain: the vast workforce of human annotators whose labor is essential for training machine learning systems is often kept deliberately in the dark about how their work is ultimately used. The disconnect between the people who label the data and the organizations that deploy the resulting AI systems raises serious ethical questions about informed consent, labor practices, and the hidden human infrastructure of military technology.

How Data Labeling for the Military Works

Modern AI systems, particularly those used for image recognition, natural language processing, and decision support, require enormous quantities of labeled training data. Someone must look at thousands of satellite images and draw boxes around vehicles. Someone must listen to hours of audio and transcribe what they hear. Someone must read text and categorize it by topic, sentiment, or intent.

This work is typically outsourced through a chain of intermediaries. A defense contractor might hire a technology company to develop an AI system. That company might subcontract the data labeling to a platform like Appen, which in turn distributes the work to freelancers around the world, many of them in countries where labor costs are a fraction of what they are in the United States or Europe.

At each step in this chain, the ultimate end use of the data becomes more obscured. The gig workers at the bottom of the pyramid see individual tasks — label this image, transcribe this audio clip — without context about the broader system they are helping to build. Appen's terms of service and non-disclosure agreements often prevent workers from learning the identity of the end client, let alone the application their work supports.

What the Workers Were Labeling

The investigation found that African gig workers were performing a variety of annotation tasks that align with known military AI applications. These included identifying and classifying objects in aerial and satellite imagery — a capability central to military surveillance and targeting systems. Workers were also involved in transcribing and categorizing communications data, and in labeling geospatial features in mapping imagery.

None of the workers interviewed by Rest of World were told that their work was connected to military or intelligence applications. Several expressed shock and discomfort upon learning the ultimate use of their labor, with some saying they would not have accepted the work had they known.

The ethical implications are particularly pointed given the geopolitical context. Some of the workers are based in countries that have experienced U.S. military operations or that have complicated relationships with American foreign policy. The idea that their labor might be contributing to military capabilities that could be directed at regions like their own was deeply troubling to several of the workers interviewed.

  • Gig workers in Africa were hired through Appen to label data that fed into U.S. military AI systems
  • Workers were paid a fraction of Western wages and had no knowledge of the military end use
  • The multi-layered subcontracting chain deliberately obscures the ultimate application of data labeling work
  • Workers expressed shock and discomfort upon learning how their labor was being used

Appen's Role in the AI Supply Chain

Appen, an Australian company and once one of the largest data annotation platforms in the world, has long served as a critical intermediary in the AI supply chain. At its peak, the company maintained a global workforce of more than a million contractors, providing labeled data to technology companies, government agencies, and defense contractors.

The company has faced financial difficulties in recent years as the data labeling industry has become more competitive and as some AI companies have moved annotation work in-house. But its historical contracts with defense and intelligence clients mean that significant quantities of data labeled by its global workforce have already been incorporated into military AI systems.

Appen's defenders argue that the company operates within the law and that its contracts with clients include appropriate provisions for data security and confidentiality. Critics counter that confidentiality provisions that prevent workers from knowing what they are working on are inherently exploitative, particularly when the work involves military applications that the workers might find morally objectionable.

The Ethics of Invisible Labor

The investigation highlights a broader ethical challenge in the AI industry. The technology sector has been remarkably effective at rendering the human labor behind AI systems invisible. When a military AI system correctly identifies a target in a satellite image, the credit goes to the algorithm and the engineers who designed it. The thousands of human annotators whose labor made the system possible are rarely acknowledged, let alone consulted about how the system is used.

Labor rights advocates have called for greater transparency in the AI supply chain, including requirements that data labeling workers be informed about the general category of application their work supports. Some have proposed certification schemes, similar to fair trade labels, that would verify that AI training data was produced under ethical labor conditions with informed worker consent.

Implications for AI Governance

The revelation also has implications for the growing international debate about AI governance. As governments develop frameworks for regulating AI systems, the question of how training data is sourced and labeled has received relatively little attention compared to issues like algorithmic bias and safety testing.

The use of unwitting foreign labor to train military AI systems could become a flashpoint in international negotiations over AI governance, particularly as developing nations push for greater recognition of their role in — and greater benefit from — the global AI economy. If the workers who make AI possible don't even know what they're building, the foundation of the AI industry rests on a troubling moral asymmetry.

For the gig workers in Africa who discovered the true nature of their work, the experience crystallized a growing awareness that the global AI economy depends on their labor but does not feel obligated to include them in decisions about how that labor is used.

This article is based on reporting by Rest of World.