The automation story still depends on people

Massachusetts Sen. Ed Markey has put autonomous-vehicle companies under renewed scrutiny by focusing on a part of the industry that often sits outside public-facing marketing: the human staffers who help vehicles when the software gets confused or stuck. According to Fast Company, Markey’s office sent detailed questions to seven firms in February: Waymo, Tesla, Zoox, Aurora, Motional, Nuro, and May Mobility.

The investigation found that none of the seven companies would disclose how often their vehicles require human help to recover from difficult situations. That refusal is significant because it leaves a basic public question unanswered. If a vehicle regularly needs remote guidance or intervention, the practical boundary between “autonomous” and “assisted by hidden labor” becomes less clear.

Markey’s office also identified wide variation in communication lag times between vehicles and the workers assisting them. In safety-critical systems, latency is not a technical footnote. It can shape whether a remote instruction arrives in time to matter at all.

Remote assistance is not remote driving, but it is still a safety issue

Companies in the sector generally argue that their remote teams are not directly driving the vehicles. Instead, they say those workers provide advice or contextual input while the onboard software remains in control and can reject suggestions. That distinction may be legally and technically important, but it does not eliminate the policy questions the investigation raises.

Human remote assistance still introduces operational dependencies: staffing quality, training, fatigue, connectivity, and escalation procedures. If a robotaxi or delivery vehicle stops unexpectedly, the speed and competence of the human support layer can affect traffic flow, emergency response, and public safety.

Fast Company notes that city officials have already raised concerns about unplanned stops and how they disrupt streets and emergency operations. Those incidents can force involvement not only from company personnel but also from local responders. In that context, remote-assistance systems become part of public infrastructure interactions, not just internal company processes.

The labor geography matters too

The investigation found that Waymo is the only company among those surveyed that relies on staffers based outside the United States to assist its driving system, and the only one in which a large share of the workers in this role lack a U.S. driver’s license. Those details sharpen questions about standards. If companies use remote assistance differently, with different labor pools, training assumptions, and response times, then the industry may be developing without a consistent baseline for what safe support operations should look like.

That inconsistency is precisely what makes Markey’s inquiry more than a narrow political exercise. It points to a governance gap. Public discussion about autonomous vehicles often centers on sensors, software, and whether a person sits behind the wheel. Less attention goes to the invisible operations centers that help systems recover from ambiguity.

The broader implication is uncomfortable for the industry but useful for regulators and the public. Driverless systems are not simply replacing human judgment; in many cases they are redistributing it. Some of that judgment still exists, just farther away, mediated by networks, procedures, and workers who are easy to overlook.

Markey’s investigation does not show that autonomous vehicles cannot work. It does show that the current picture is more hybrid than many narratives imply. If the industry wants public trust, it may eventually need to be more explicit about where the humans still are, what they do, and how often the software still needs them.

This article is based on reporting by Fast Company.

Originally published on fastcompany.com