Press 2 for... English With a Spanish Accent

Washington state residents calling the Department of Licensing phone system encountered a peculiar problem: pressing 2 for Spanish service did not produce Spanish. Instead, callers heard English words spoken with a pronounced Spanish accent. Rather than translating the prompts into Spanish, someone had simply fed English text into Amazon's Polly text-to-speech system using a voice called "Lucia," designed for Castilian Spanish.

The result was a system that pronounced "Please press 1" as though a Spanish speaker were reading English words phonetically — something between comedy and bureaucratic dysfunction. A TikTok video by Maya Edwards documenting the issue went viral, attracting millions of views and widespread mockery of the state agency's implementation.

How the Error Actually Happened

Amazon Polly is a text-to-speech service that converts written text into spoken audio. It offers voices in dozens of languages, and the Lucia voice is specifically designed to read Spanish text aloud with natural Castilian Spanish pronunciation. The critical mistake was treating a text-to-speech system as though it were a translation system.

Instead of first translating the English prompts into Spanish and then feeding that Spanish text to the Lucia voice, someone at or contracted by the Department of Licensing simply pasted the original English text into the Lucia voice configuration. The system performed exactly as designed: it read the English words using Spanish phonetic rules, producing an accented version of English rather than actual Spanish-language content.
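The wrong and right configurations can be sketched as Amazon Polly request payloads. The prompt wording below is invented for illustration; the agency's actual prompts and pipeline were not published.

```python
# A minimal sketch of the misconfiguration, shaped like the parameters
# Amazon Polly's synthesize_speech API accepts. The prompt text is hypothetical.

english_prompt = "Please press 1 for driver licensing services."
spanish_prompt = "Por favor, presione 1 para servicios de licencias de conducir."

# What appears to have happened: untranslated English text, Spanish voice.
# Lucia applies Castilian Spanish phonetics to the English words,
# producing accented English rather than Spanish.
wrong_request = {
    "Text": english_prompt,   # English text was never translated
    "VoiceId": "Lucia",       # Castilian Spanish voice
    "OutputFormat": "mp3",
}

# The correct flow: translate the prompt first, then hand the Spanish
# text to the Spanish voice.
right_request = {
    "Text": spanish_prompt,   # translated before synthesis
    "VoiceId": "Lucia",
    "OutputFormat": "mp3",
}

# With boto3, either payload would be submitted like this:
#   import boto3
#   polly = boto3.client("polly")
#   audio = polly.synthesize_speech(**right_request)
```

The point the sketch makes is that nothing in the request itself is invalid: Polly has no way to know the `Text` field is in the wrong language, so the error can only be caught by a human who listens to the output.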

The error reveals a fundamental misunderstanding of how language technology works. Text-to-speech systems do not translate; they pronounce. Feeding English text to a Spanish voice is like asking a Spanish speaker who knows no English to read an English script aloud: every word gets pronounced, just by the wrong phonetic rules.

Broader Implications for Government AI Deployment

The Washington DMV incident, while humorous on its surface, points to a more serious issue: the gap between government agencies adopting AI-powered tools and their capacity to implement those tools correctly. As federal, state, and local governments increasingly turn to automated systems for citizen services, the potential for misconfiguration grows.

Language access is not a trivial matter. Executive orders and federal regulations require government agencies to provide meaningful access to services for individuals with limited English proficiency. A phone system that purports to offer Spanish service but actually delivers accented English fails this obligation entirely, potentially affecting thousands of Spanish-speaking residents who rely on the Department of Licensing for driver's licenses, vehicle registration, and identification services.

The Viral Response

The TikTok video that exposed the issue resonated far beyond Washington state. It touched on a widespread anxiety about AI systems being deployed without adequate quality assurance, particularly in government services that affect vulnerable populations. Commenters noted that the error suggested either that no Spanish speaker tested the system before deployment or that testing was skipped entirely.

The incident joins a growing catalog of government technology failures that go viral on social media, from automated unemployment systems that wrongly denied benefits during the pandemic to chatbot deployments that provided incorrect information. Each incident erodes public trust in government's ability to harness technology effectively.

The Fix and the Bigger Question

The Washington Department of Licensing acknowledged the problem and stated it was "seeking to fix it and figure out how it happened in the first place." Reports varied on whether the issue had been fully resolved by late February 2026, suggesting the fix may have been more complex than simply uploading Spanish translations.

The bigger question is how many similar misconfigured AI systems exist across government agencies nationwide. Amazon Polly, Google Cloud Text-to-Speech, and similar services are being rapidly adopted by public agencies looking to automate citizen interactions. Without proper oversight, testing protocols, and technical expertise, each deployment carries the risk of similar errors — errors that may not always be as immediately obvious or amusing as speaking English with a Spanish accent.

A Lesson in Testing and Accountability

The fundamental lesson of the Washington DMV incident is simple but crucial: AI tools do exactly what they are configured to do, nothing more. They do not compensate for human errors in setup, and they do not flag when they are being used inappropriately. The responsibility for correct deployment lies entirely with the humans configuring these systems, and that responsibility demands adequate testing by people who actually speak the target language.

For government agencies across the country, the incident serves as a cautionary tale about the gap between purchasing AI technology and deploying it effectively. The tools are only as good as the implementation, and implementation requires expertise, testing, and accountability that no amount of technology can replace.

This article is based on reporting by Gizmodo.