AI Writing Advice From People Who Never Agreed to Give It

Grammarly has drawn sharp criticism after its "expert review" feature was found to generate AI writing feedback attributed to real journalists, editors, and academics — none of whom gave the company permission to use their names and likenesses. The feature, which launched in August 2025, offers users writing advice "inspired by" subject matter experts, but the execution has raised serious questions about consent and identity rights in the age of AI.

The controversy escalated when The Verge discovered that its own editorial staff had been included as "experts" in the system. Editor-in-chief Nilay Patel, editor-at-large David Pierce, and senior editors Sean Hollister and Tom Warren all appeared as available reviewers within Grammarly's interface, though none of them had any relationship with the feature or had granted Grammarly permission to use their identities.

Beyond Journalists: Deceased Academics as AI Proxies

As Wired first reported, the issue extends far beyond living journalists. Grammarly's expert review system also includes recently deceased professors, effectively resurrecting their professional personas as AI-generated writing coaches. The inclusion of dead academics raises particularly thorny ethical questions, as these individuals can never consent to having their expertise and reputation leveraged by an AI system.

The feature works by analyzing a user's writing and generating feedback comments that appear to come from these "experts." Users see suggestions framed as reviews from specific, named individuals, creating an impression of personal endorsement that does not exist. The AI generates the commentary, but attributing it to real people lends it an air of authority that generic AI feedback would lack.

How the Feature Works

  • Users can select from a roster of subject matter "experts" to review their writing
  • The AI generates feedback styled as if it came from the selected expert
  • Comments appear with the expert's name, creating an illusion of personal review
  • No consent was obtained from the individuals whose names are used
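
Grammarly has not published technical details, but the pattern the list describes is straightforward to approximate. The sketch below is a hypothetical illustration in Python, not anything Grammarly has confirmed: the roster names, prompt wording, and model stub are all invented for the example. What it makes concrete is that the named "expert" exists in the pipeline only as a string in a prompt and a byline.

    # Hypothetical sketch only: not Grammarly's implementation.
    # The roster, prompt wording, and model stub are invented
    # for illustration.
    from dataclasses import dataclass

    @dataclass
    class Expert:
        name: str
        title: str

    # Placeholder roster; a real product would list real people here.
    ROSTER = [
        Expert("Jane Doe", "Technology Editor"),
        Expert("John Smith", "Professor of Rhetoric"),
    ]

    def build_prompt(expert: Expert, draft: str) -> str:
        # The "expert" enters the system only as text in a prompt.
        return (
            f"You are {expert.name}, {expert.title}. "
            f"Review this draft and give writing feedback "
            f"in their voice:\n\n{draft}"
        )

    def model_stub(prompt: str) -> str:
        # Stand-in for a call to a large language model.
        return "Tighten the opening and cut the passive constructions."

    def expert_review(expert: Expert, draft: str) -> str:
        feedback = model_stub(build_prompt(expert, draft))
        # Attribution happens here: machine output, human byline.
        return f"{expert.name} ({expert.title}): {feedback}"

    print(expert_review(ROSTER[0], "The feature was launched by the company."))

Nothing in a flow like this requires the named person's knowledge or participation, which is precisely the gap the consent objections target.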

The Legal and Ethical Minefield

The practice sits at the intersection of several emerging legal battlegrounds. Right of publicity laws, which protect individuals from unauthorized commercial use of their identity, vary by jurisdiction but could well apply here. Several states have strengthened these protections in recent years, partly in response to concerns about AI-generated deepfakes and digital impersonation.

For Grammarly, a company that built its reputation on helping people communicate more effectively, the irony is hard to miss. The expert review feature trades on a false pretense, presenting AI-generated content as if it carried the endorsement of respected professionals. That sits uncomfortably alongside the company's stated mission of improving communication clarity and authenticity.

A Symptom of a Larger Problem

The Grammarly incident is part of a broader pattern of AI companies using real people's work, likeness, and reputation without explicit permission. From training data controversies to synthetic voice cloning, the technology industry has repeatedly pushed the boundaries of what it considers fair use of personal identity and intellectual output.

What makes Grammarly's case particularly notable is the directness of the attribution. This is not a question of training data buried in a model's weights — these are named individuals whose professional reputations are being actively leveraged to sell a product. The feature transforms real people into unwitting brand ambassadors for AI-generated advice they had no hand in creating.

As AI companies continue to integrate real-world personas into their products, the tension between technological capability and ethical responsibility will only intensify. Grammarly's expert review feature may represent one of the more visible examples, but it is unlikely to be the last. The question now is whether regulatory frameworks and industry norms can evolve quickly enough to protect individuals whose identities become raw material for AI systems.

This article is based on reporting by The Verge.